Does MOG replace current Check-in / Check-out style source code programs like Perforce, SourceSafe, CVS?
No! MOG only tracks game data files, not the original source files. MOG does have built-in version control, but it is only used for tracking the exported and processed game files. MOG works gracefully alongside existing version control solutions.
Does MOG handle Source Code?
No. MOG was specifically designed for data, not C, C++, C#, or other source code. Programmers will continue to use whatever solutions they prefer for tracking their source code.
How isolated is the MOG server when installed?
The MOG Server only manages the delivery of commands between the Clients, Slaves, and Editors. This is performed over open TCP connections on a user-definable port. One important thing to note about the MOG Server is that it is completely mobile: it can be moved to and launched on any other machine, including a normal XP workstation, because the Server and Clients obtain their relationship via the MOG Repository. A mobile MOG Server ensures that the team can avoid downtime in the event of a hardware failure, as long as the MOG Repository is not on the machine experiencing the failure.
When and where would an artist be required to switch from their primary editing tool (Max, Maya, Photoshop) into MOG?
The MOG experience should start when the artist exports their data. The processing power of MOG allows assets to be broken up into smaller, more manageable pieces and then individually processed by multiple network machines. This MOG process natively supports multi-platform processing, so the complexities of multi-platform development are completely masked from the artists. Artists stay within the MOG environment while they test and examine their recent changes in their own local workspace before blessing the asset to the rest of the team. In fact, some MOG integrations can be so seamless that artists can bypass the MOG Client altogether and simply switch between their primary editing tool and the game editor.
How can an artist answer the question, "Did my latest change make it into this build?"
First of all, artists can immediately see, change, and interact with their modifications within their own local workspaces. Before a change is sent to the rest of the project, the artist must first bless that asset from their Inbox. Since MOG is a latent system, there can be some very predictable delays for blessed assets if they need to be packaged. Artists will always know the state of their blessed assets by simply checking their Sent Items folder.
How do you equate game asset files with the source files that created them?
MOG assets can have properties imported along with them that can indicate anything the project wants to retain, such as special exporting options, the filename of the original source file that generated the export, the version of the source editing application, the timestamps of exporting DLLs, etc. It is important to mention that MOG assets are not always the result of an export; it is entirely up to the project to define what kind of data they want to send through the MOG pipeline. For example, some of our clients import the original PSD files because their rippers can read the native application file format, so they don't need an intermediary exported format when processing the data.
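As an illustration, this kind of per-asset property data can be modeled as a simple key-value record attached to the exported file. The sketch below is hypothetical; the function and property names (`make_asset_record`, `source_file`, `exporter_version`) are invented for illustration and are not MOG's actual schema or API:

```python
# Hypothetical sketch of per-asset properties as a key-value record.
# None of these names come from MOG itself; they illustrate the kind of
# provenance data a project might choose to retain with each export.

def make_asset_record(asset_path, **properties):
    """Bundle an exported asset path with arbitrary project-defined properties."""
    return {"asset": asset_path, "properties": dict(properties)}

record = make_asset_record(
    "characters/hero.mesh",
    source_file="characters/hero.max",   # original source that generated the export
    exporter_version="exporter 1.2",     # version of the source editing application's exporter
    export_options="tangents=on;lod=3",  # special exporting options
)

print(record["properties"]["source_file"])  # characters/hero.max
```

Because the record is just a property bag, each project can decide for itself which provenance details are worth keeping.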
What should a build wrangler *not* write as a command-line launch from the primary tool (Maya/Max)?
They should not do any data munging in the export process. That work needs to be saved for the MOG process so that assets can be reprocessed within MOG by changing rippers, without ever having to go back to the source application. MOG wants as little munged data as possible so that wide-sweeping data format changes can be performed within MOG. MOG is all about fast, distributed network processing instead of stalling artists while they wait for slow, intensive exporters to munge data on their local machines.
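The division of labor above can be made concrete with a minimal sketch, assuming hypothetical function names (nothing here is MOG's real API): the exporter only serializes raw data from the source tool, while all platform-specific munging lives in a ripper that can be rerun inside MOG:

```python
# Hypothetical sketch: keep the exporter thin, put all munging in the ripper.
# Function names and the "raw-mesh" format are invented for illustration.

def export_mesh(vertices):
    """Exporter: dump raw, unmunged data straight from the source tool."""
    return {"format": "raw-mesh", "vertices": list(vertices)}

def rip_mesh(raw, platform):
    """Ripper: platform-specific munging, rerunnable inside MOG without
    ever returning to Max/Maya."""
    scale = 0.5 if platform == "handheld" else 1.0
    return [(x * scale, y * scale, z * scale) for (x, y, z) in raw["vertices"]]

raw = export_mesh([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
print(rip_mesh(raw, "handheld"))  # [(0.5, 1.0, 1.5), (2.0, 2.5, 3.0)]
```

Because the exported data is untouched, changing the target format means changing only `rip_mesh`; the artist's export never has to be redone.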
Starting from a blank project what does a build wrangler need to write to maximize his MOG experience?
Initially, the team can simply import their project's binaries and use MOG as nothing more than a data delivery mechanism. Slowly, as the system begins to make more sense, the tools programmers can begin exporting data directly into MOG and performing very targeted data munging on those assets. As the project progresses, more complex ripping and packaging can be adopted, as well as additional platforms.
What does a tool programmer need to write?
Rippers are the most customized component in MOG. Whether that means lots of little simple rippers or one large, all-encompassing ripper is entirely up to the tools programmer and how their past pipelines have worked. Often, processing an asset is nothing more than sending it through a series of ad-hoc tools that have already been written and are most likely already in use: image reducers, polygon filters, polygon strippers, image blenders, etc. These tools are usually precisely targeted at one thing and highly optimized for the project's specific needs. Depending on the team's needs, additional time could be spent integrating MOG directly into their existing custom applications and editors, making it even easier for their content creators.
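The "series of already-built tools" pattern can be sketched as a simple pipeline: each stage is a small, targeted function (the stages below are invented stand-ins for a project's real utilities such as image reducers or polygon strippers), and the ripper just runs an asset through them in order:

```python
# Hypothetical sketch of a ripper composed from small, targeted tools.
# The stage functions are stand-ins for a project's real ad-hoc utilities.

def reduce_image(asset):
    """Stand-in for an image reducer: halve each dimension."""
    asset["width"] //= 2
    asset["height"] //= 2
    return asset

def strip_unused_channels(asset):
    """Stand-in for a channel stripper: drop channels the game never reads."""
    asset["channels"] = [c for c in asset["channels"] if c != "unused"]
    return asset

def rip(asset, stages):
    """Run an asset through each tool in order, as a ripper would."""
    for stage in stages:
        asset = stage(asset)
    return asset

texture = {"width": 512, "height": 512, "channels": ["rgb", "alpha", "unused"]}
ripped = rip(texture, [reduce_image, strip_unused_channels])
print(ripped)  # {'width': 256, 'height': 256, 'channels': ['rgb', 'alpha']}
```

Whether a team writes many such small stages or one monolithic ripper, the composition point stays the same, which is what makes the choice a matter of taste rather than architecture.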
What type of processing happens within MOG as opposed to the source editing tool?
Preferably, no processing should happen in the source tool (i.e., Maya/Max exporters). MOG rippers are the preferred spot for data munging because they support network-distributed ripping, allowing programmers to perform wide-sweeping changes to data formats in a timely manner without having to bother the artists or re-export assets from the original editing tool.
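The reason rippers distribute well is that each rip job is independent, so jobs can be fanned out across machines. A minimal local simulation of that fan-out, using a thread pool as a stand-in for MOG's network slave machines and a placeholder rip step, might look like:

```python
# Minimal local simulation of distributed ripping: a thread pool stands in
# for MOG's network slave machines. The rip step itself is a placeholder.
from concurrent.futures import ThreadPoolExecutor

def rip_asset(name):
    """Placeholder for a real per-asset rip (format conversion, reduction, etc.)."""
    return f"{name}.ripped"

assets = ["hero.mesh", "villain.mesh", "level1.map", "ui.tex"]

# Each asset rips independently, so the work parallelizes trivially.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(rip_asset, assets))

print(results)
```

Because no job depends on another's output, a format-wide change is just a re-run of the same fan-out with a new `rip_asset`, with no artist involvement.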
Let's break this up for a minute and discuss the advantages and disadvantages of munging data in exporters.
In summary, based on our game development experience, games must go through multiple rigorous and strenuous optimization phases. Unlike applications, all of this optimization happens at the asset level. In our opinion, a pipeline where changes can't be made without returning to the source application is a pipeline designed for failure.