Tuesday, December 31, 2013

REGARDING THE FUTURE OF C# ...

I'll be out of the office for a few days, and before I go let me post some words about the features I'd like to see implemented in upcoming versions of C# (btw, on my return I'll be posting again about The APE).

A couple of weeks ago, the "This Week On Channel 9" series referenced Mads Torgersen's presentation in London on "The Future of C#", announcing new features that may get implemented in C# 6.0.

So, in this post, let me explain some of the features that I hope get implemented in C# in the short/middle term:

1. LLVM-Like Compiler For Native Code


I've talked about this many times, but I think this is the perfect time to mention it again.

So far, if you want to compile MSIL to native code at once, you can use a tool called NGen, which creates a native image of the code for the machine where compilation is being done. The problem with this tool is that its main purpose is to reduce startup times. Meaning? You won't get optimized bits for the whole code; just for the blocks first executed when the program starts.

Imho, we need more ... in this relatively new world of app marketplaces it'd be really handy to rely on a model where you can deliver optimized native bits to the device/console/machine where the app will be downloaded and installed, don't you think?

Picture it like this: say you create an app/game in C# for the Xbox One (using portable assemblies or not) and compile your source code to MSIL. Since the console's hardware is the same in terms of main components (processor, memory, etc.), why not compile the whole MSIL code to native bits optimized for the Xbox One at once, either on your side or on MSFT's servers?

With an LLVM-like compiler this could be achieved and extended to other platforms. But wait a minute! Isn't that what MSFT is doing for WP8? It sounds like it. But wait a minute, again! Isn't that something like the AOT compilation found in the Mono Framework? If the latter gives optimized bits for whole assemblies per platform, then it is!

In fact, many sources mention the so-called "Project N", which would be used to speed up Windows 8.1 apps. What's more, a few sources also mention that MSFT is working on a common compiler for C++/C#. I hope it also brings a better-performing way to do interop with C++.

True or not, this is a "must have" in order to end the C++/C# perf discussion!

2. "Single Instruction, Multiple Data" (SIMD)


In modern hardware architectures, SIMD has become the standard way to boost performance in specific operations, in particular "vectorized" mathematical ones.

As a matter of fact, C++ has the DirectXMath libraries (based on the formerly named XnaMath ones), which do implement SIMD but unfortunately are not available to C#.

Again, SIMD is already present in the Mono Framework for math operations, so why not add it to the .NET Framework once and for all?
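For reference, this is roughly what the Mono approach looks like; the `Mono.Simd` namespace and the `Vector4f` type below are Mono-specific (you need Mono and a reference to Mono.Simd.dll), with no .NET Framework equivalent at the time of writing:

```csharp
// Sketch of Mono's SIMD support; requires a reference to Mono.Simd.dll.
using Mono.Simd;

class SimdSketch
{
    static void Main()
    {
        var a = new Vector4f(1f, 2f, 3f, 4f);
        var b = new Vector4f(5f, 6f, 7f, 8f);

        // On capable CPUs, Mono's JIT maps this to a single SSE instruction
        // instead of four scalar additions.
        Vector4f sum = a + b;

        System.Console.WriteLine(sum.X + ", " + sum.Y);
    }
}
```

The nice part is that the same code falls back to plain scalar math on hardware without SIMD support, so it degrades gracefully.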

I hope MSFT listens to us ...

3. Extension Properties


We have extension methods, so this would be a natural corollary. Today, you can only mimic getters (and setters) with methods like this:

   public static string NameToDisplayGetter(this IPerson person)
   {
      ...
   }

Then, why not have something like this?

   public static string NameToDisplay: IPerson person
   {
      get { ... } // You could also add a setter, if needed.
   }

Of course, the syntax in the example above may vary, but I guess you get the idea. There are several use cases where a feature like this could come in handy, including MVVM or plain INotifyPropertyChanged scenarios.
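To make the current workaround concrete, here is a complete version of the extension-method-as-getter pattern; `IPerson` and the member names are just illustrative:

```csharp
public interface IPerson
{
    string FirstName { get; }
    string LastName { get; }
}

public static class PersonExtensions
{
    // Today's best approximation: an extension *method* posing as a
    // read-only property getter.
    public static string NameToDisplayGetter(this IPerson person)
    {
        return person.LastName + ", " + person.FirstName;
    }
}
```

With extension properties, the call `person.NameToDisplayGetter()` would shrink to `person.NameToDisplay`, which is exactly what makes them attractive for data-binding scenarios.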

4. Generic Enum Constraints


Generics is one of my favorite .NET features. There are lots of things that can be achieved with it, but there's still room for improvement.

One of the things to improve is constraints. So far, when constraining the kind of type a generic parameter accepts, we have only two options: class and struct. So, what about enums?

Currently, if you want to mimic an enum constraint, you have to write something like ...

   public void OperationX<TEnum>(TEnum myEnum)
      where TEnum: struct, IComparable, IFormattable, IConvertible // ... and so on and so forth.
   {
      … usual stuff …
   }

... and also, given that those constraints only approximate an enum, you need to check whether an enum has actually been passed, generally throwing an exception if not:

   if (!typeof(TEnum).IsEnum)
   {
      throw new ArgumentException("The passed type is not an enum");
   }

Why not simplify it to something like this?

   public void OperationX<TEnum>(TEnum myEnum)
      where TEnum: enum
   {
      … usual stuff …
   }

Not only does it make sense, but it would also simplify things a lot and open up a wide range of handy operations and extension methods.

5. NumericValueType Base Class


.NET's CLR treats structs in a special way, even though they have a base class: ValueType.

I won't explain the characteristics of built-in primitives and structs here; instead, I'll ask the following question: in the current version of C#, can we achieve something like this?

   TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
     where TNumeric: struct
   {
       return number1 + number2 * number3;
   }

The answer: not without proper casts. So, a life changer for this type of situation would be to add a specialization of ValueType that enjoys the benefits of structs and also supports basic math operations without any kind of wizardry: NumericValueType.
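In the meantime, one escape hatch available since C# 4.0 is `dynamic`, which defers operator resolution to run time. It works, but it's slow, which only underlines why a real numeric constraint would help:

```csharp
using System;

static class MathOps
{
    // Works for any built-in numeric type, at the cost of
    // dynamic dispatch on every call.
    public static TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
        where TNumeric : struct
    {
        return (dynamic)number1 + (dynamic)number2 * (dynamic)number3;
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(MathOps.Calculate(1, 2, 3)); // prints "7"
        Console.WriteLine(MathOps.Calculate(1.5, 2.0, 2.0));
    }
}
```

Note that nothing stops you from passing a non-numeric struct here either; just like the enum case above, the failure would only show up at run time.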

With that class and a new reserved word like, say, "numeric", "number" or "primitive", we could write generic operations and extension methods with syntax as simple as:

   TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
     where TNumeric: numeric
   {
       return number1 + number2 * number3;
   }

How about declaring new types of numbers? Easy ...

   public numeric Percentage
   {
      … usual stuff …
   }

... or ...

   public numeric Half
   {
      … usual stuff …
   }

There would be no need to specify "struct", since "numeric" would be a value type that supports basic math operations (which we would implement when declaring the type, maybe by overriding some base operations), so in common scenarios there would be no need to cast values to do math.

6. Declaration of Static Operations On Interfaces


Put simply: the ability to declare static operations when writing interfaces, like this:

   public interface IMyInterface
   {
      static void DoStaticOp();

      static bool IsThisTrue { get; }

      ... instance properties and operations ...
   }

This presents a challenge to both polymorphism rules and abstract declarations, that is, at a "static" level. But as usual, with the proper rationale and care when modifying the language specs, it could be achieved. Many of you may ask "why bother?", but believe me when I say that I've met situations where static operations on interfaces would have come in really handy.
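For the curious, the usual workaround today is a naming convention plus reflection, which gives no compile-time safety at all; everything below (the interface, the class, and the helper) is illustrative:

```csharp
using System;
using System.Reflection;

interface IMyInterface
{
    void DoInstanceOp();
}

class Implementation : IMyInterface
{
    // By convention only; nothing forces implementers to provide this.
    public static bool IsThisTrue { get { return true; } }

    public void DoInstanceOp() { }
}

static class StaticOps
{
    public static bool GetIsThisTrue<T>() where T : IMyInterface
    {
        // Looked up at run time; a missing member surfaces as a null
        // PropertyInfo (and then a crash), not as a compile error.
        PropertyInfo prop = typeof(T).GetProperty(
            "IsThisTrue", BindingFlags.Public | BindingFlags.Static);
        return (bool)prop.GetValue(null, null);
    }
}
```

With static declarations on interfaces, the compiler could enforce what this convention can only hope for.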

Well, this is it for today. What would you guys like to see implemented in C# 6 and beyond? Comments are welcome.

See ya when I get back,
~Pete

Thursday, December 19, 2013

THE ASSET PIPELINE EDITOR - PART 6

After a short break to do some marketing for the APE (sigh!), it's time to resume posting, so it's the turn of a couple of recent additions I implemented in the core functionality:
  • "Direct" build (what?), and
  • Asset Packagers (oh yeah, baby!)
These features have been requested by some of you, so here they are ...

Direct Builds


In part 4 of the series I mentioned that there were two actions when compiling assets, the first one was "build" and the second one "copy/move", and the way to indicate whether to copy or move assets was by (un)checking the following checkbox:


Well, as a corollary of the command-line tool "APEBuild", a third option is now available: direct builds. Now you can tell the APE to compile assets directly into the output folder instead of doing it in the "local" folder. In short, it's a way to reduce steps: no need to move or copy.



So, you will have three options from now on: "Copy Assets", "Move Assets" and "Direct Build". Needless to say, the "Default Copy Action" field ("Copy Always", "Copy If Newer" and "None") will only be enabled when "Direct Build" is disabled, and this feature can also be used with APEBuild.

If you look carefully at the picture above, you will get a preview of what I'll talk about next ...

Asset Packagers


Throughout the years, as a moderator in the creators' forums, I read many questions regarding the possibility of zipping whole content folders. Recently, it was brought to my attention (thanks again, Javier) that some authorware offers the possibility of compressing all output into a single zip file.

Guess what? Now you can also do it with the APE!

How come? Simple ... you create your own packager (where you can use any of the compression techniques available in .NET 4.5, use third-party libraries, or add your own compressor), and when you plug it into the APE, the latter will show it as an option on the General Settings tab:


By default, as usual, there is a built-in "pass-through" packager, named "No Packaging", which we can use for cases where no zipping is required.

So, when you create/load a solution, its panel will look like this:



We can then select the packager, along with the writer, to set as the default one for the solution (which, of course, can be overridden per project). But that is not it ...


The picture above shows a new field to set in the Project Settings tab: "Pack To". By default, that field initially equals the path assigned to the "Copy To" field, that is, the output path. But you can change it in case you need a different target directory for the zipped file.

This opens a new set of possibilities, since selecting a packager won't disable copy/move operations, and vice versa. So, if you decide to copy assets and also create one huge compressed file for the whole structure, you can set a different path for the packed files and presto! Do you want just the packed files? No problem: either set the direct build or activate copy/move actions with "None", and you will only get the zipped file.

As I say, you can build, build+copy/move, build+package, build+copy/move+package, direct build, or direct build+package. Pick the right combination for your project's needs.

Please notice that the packaging action is meant to apply to a whole structure of assets and not to each asset individually; in other words, unlike asset writers (which affect each raw file individually as part of the import/export process, even though they are applied per project), asset packagers affect a whole structure of folders and asset files per project. Say you have a folder named "Content" with all the asset files built in the last execution of the command; then this folder is the one the packager will take as a reference to create and output the compressed file (or the compressed files you decide to output).


Last but not least, the "Writers" tab has been renamed to "Output Providers", since it now includes a configuration panel for packagers, where you can select the packager to use per compilation profile and configure its properties, if any are available.

In the example above, no packaging is set for the Windows target platform and the "Debug" compilation profile. Plus, since this default packager is a pass-through one, there are no public properties you can tweak.

And again, this feature is automatically included in APEBuild!

To sum up ...


With these two additions, the APE now covers several use cases, from the usual ones to the weirdest! I am eager to see what you guys come up with when the first released version of the APE gets launched ...

Btw, I continue working on this handy solution as we speak, so new and exciting features are added on a daily basis.

We are close to starting the campaign at IndieGoGo, so I hope you stick around!!!

'till next post,
~Pete

Friday, December 13, 2013

THE ASSET PIPELINE EDITOR - PART 5

In this part of the series I'll answer some of the questions that some of you've been asking me lately, in particular:

  • Does the APE watch source files?
  • Is there a command-line version of it? And
  • Why not publish it as open-source?
I attempted to give brief answers to a couple of them in this thread at GameDev, and also by email to the guys behind the WaveEngine, but I think they deserve a post here with further details.

So, let's begin ...

Does the APE watch source files?


To answer this question, I need to explain what happens when a new solution is created and saved.

Basically, after saving a solution you will find the following structure on disk:

   + Root Path
      + [Solution's Name] folder
         - [SolutionFileName].fps
         + "Sources" folder
             - sourcefile1
             ...
             - sourcefileN
         + "Projects" folder
            + [Platform1's Name] folder
                - [ProjectFileName1].fpp
               + "Builds" folder
                  + [Profile1] folder
                     - output.fpb
                    + Content folder
             ...
            + [PlatformN's Name] folder
                - [ProjectFileNameN].fpp
               + "Builds" folder
                  + [Profile1] folder
                     - output.fpb
                    + Content folder

The APE creates a folder named "Sources", which is used as a "local" repository for the whole solution. Within it, you will only find files (no folders). Thus, when you add a new file to any of your projects, the APE will copy that source file to the repository and add the corresponding raw file to the structure of your solution.

Following this rationale, there is no need to dynamically watch for file changes. Why? Simple: if you manually change one of the source files directly, then the next time you build content that source file will be used to build assets, provided it complies with the condition indicated for building: Always|New|None.

In other words, the APE checks for changes to an existing source file only at the moment new builds are requested. If a project is marked as "Build Always", then no matter what, all included raw files will generate a new asset file. If a project is marked as "Build If New", then only raw files with new source files assigned will end up with a new asset file. Finally, "None" excludes the project from the build process.
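As a rough sketch, the rule could be expressed like this; the types and names below are mine (one plausible reading of "new"), not the APE's actual code:

```csharp
using System;

enum BuildCondition { Always, IfNew, None }

static class BuildRules
{
    // True when a raw file should produce a fresh asset file, following the
    // "Build Always" / "Build If New" / "None" rules described above.
    public static bool ShouldBuild(BuildCondition condition,
                                   DateTime sourceModified,
                                   DateTime? lastBuilt)
    {
        switch (condition)
        {
            case BuildCondition.Always:
                return true;
            case BuildCondition.IfNew:
                // A source file never built before, or modified since the
                // last build, counts as "new".
                return lastBuilt == null || sourceModified > lastBuilt.Value;
            default:
                return false; // None: the project is excluded from the build.
        }
    }
}
```

The point is that this check runs once per build request, which is much simpler (and more predictable) than a live file-system watcher.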

Now, there was a second part to the question posted on GameDev's forum, which had more to do with the processing side of things than with what I've explained above.

The APE will NOT replace production tools like Photoshop, Sound Forge, and so on. You will still need to create your source files there: JPEGs, WAVs, MOVs, etc.

What the APE provides is a way to indicate how to process source files to get the file format you need for your games. In case the built-in import/write units, or the ones provided later on by me and/or other users, are not useful to you, you can implement your own with full control over them.

So, if you guys want to implement a processor that converts WAVs into OGGs, you can go ahead and do it with ease. What about resizing a texture? Sure. What else? Anything you can imagine that can be achieved by setting parameters on a property grid.

For example, for the XNA'ers, in part 4 of the series I showed a processor with many features that pre-multiplies alpha, resizes textures, changes formats, and so on.
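To give a feel for what such a plug-in could look like, here is a skeleton for the WAV-to-OGG case; note that the `IAssetProcessor` interface and everything else below is purely hypothetical and not the APE's real API:

```csharp
using System.IO;

// Hypothetical plug-in contract; the APE's real interfaces may differ.
interface IAssetProcessor
{
    string SourceExtension { get; }
    string TargetExtension { get; }
    void Process(Stream source, Stream target);
}

class WavToOggProcessor : IAssetProcessor
{
    public string SourceExtension { get { return ".wav"; } }
    public string TargetExtension { get { return ".ogg"; } }

    // A knob that a host tool could expose on its property grid.
    public int Quality { get; set; }

    public void Process(Stream source, Stream target)
    {
        // A real implementation would call into an Ogg Vorbis encoder here
        // (e.g. a third-party library); omitted for brevity.
    }
}
```

The key idea is the same regardless of the exact contract: the tool discovers the processor, shows its public properties on a grid, and calls it for every matching raw file.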

So, to sum up this part of the question: to create source files you will need to use production tools, but to import them into your games as asset files with a given format, you can use the APE.

Is there a command-line version of it?


Yes, there is! And its name is "APEBuild" (thanks, Javier, for suggesting the name!).

When I designed the APE I took server-side-like use cases into consideration. As a matter of fact, it emerged as a corollary when I developed the base test assembly for import/write units (please refer to part 1 to see an image of it).

In the current state of this command-line tool, only two actions can be executed: either you build an entire solution or only a set of projects. Let's have a look at the structure ...


The picture above shows what you get when you execute the tool with no parameters (and also with wrong parameters).

So, if you want to build a solution, just execute the tool with one parameter: the path to the solution file. And if you want to build only some of the projects in the solution, then add "p:" as an argument, followed by the names of the target platforms separated by commas. See the example in that picture.

Now, there are a few restrictions: first, the solution filename must always end with ".fps"; second, the tool handles trimmed versions of the platform names; third, all passed platforms must exist in the solution or the tool won't execute; and fourth, when you pass a relative path to the solution, the folder where APEBuild is located is considered the root folder when building the absolute path.
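Those four rules are easy to picture in code. The sketch below is my own reconstruction of the described behavior, not APEBuild's actual source (all names are made up):

```csharp
using System;
using System.IO;
using System.Linq;

static class ApeBuildArgs
{
    // Hypothetical reconstruction of the parsing rules described above:
    // the first argument is the .fps solution path (relative paths are
    // resolved against the tool's own folder); an optional "p:" argument
    // lists platform names separated by commas.
    public static void Parse(string[] args, out string solutionPath, out string[] platforms)
    {
        if (args.Length == 0 ||
            !args[0].EndsWith(".fps", StringComparison.OrdinalIgnoreCase))
            throw new ArgumentException(
                "Usage: APEBuild <solution>.fps [p:Platform1,Platform2,...]");

        solutionPath = Path.IsPathRooted(args[0])
            ? args[0]
            : Path.Combine(AppDomain.CurrentDomain.BaseDirectory, args[0]);

        // Platform names are trimmed, per the tool's stated behavior.
        platforms = args.Length > 1 && args[1].StartsWith("p:")
            ? args[1].Substring(2).Split(',').Select(p => p.Trim()).ToArray()
            : new string[0];
    }
}
```

Checking that every requested platform actually exists in the solution would then happen after the solution file is loaded, which matches the third restriction above.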

So, when we execute the command for the entire solution we have been using as an example in the series, this is the result of a successful build:


And when you execute the command for a couple of projects, the result would be the following:


You will also get messages in case of warnings and exceptions:


The picture above shows a warning indicating that an import unit, which should have been plugged into the tool as an add-on, is missing. However, since it's not used during the build process, the latter runs normally.

So, it is important to remember that:

1. Before executing the command, you need to check that all the import/write units are present in the corresponding folders associated with APEBuild (as you would also do with the APE's editor), and

2. When you commit source files, you also need to commit the updated versions of the APE's solution/project files to the server, or otherwise you won't get the results you were most likely looking for.

There are some features I'd like to add in future versions, like, say, verbosity control (that is, the level of detail you get as output), but the tool gets the job done in its current state, which is really handy!

Why not publish it as open-source?


I'd have preferred to address this question closer to the campaign's launch date, but since a few of you have asked it recently, I decided to answer it now.

But before moving forward, I'd like to state that I will neither argue nor open a discussion on whether open source is good or bad business-wise, since that depends on factors whose relevance may vary per person (yes, "YMMV"); therefore, not only does it lie beyond the scope of this post, but I also don't feel like pursuing a Pyrrhic victory.

Instead, I'll post a few words explaining my decision to publish it as a commercial tool in the near future (that is, provided the campaign at IndieGoGo succeeds).

Honestly, I haven't yet decided the price for a license, but I intend to license the APE per seat, per platform, per major version. Yes, if you buy one license at launch, you will be able to use it for the whole v1.x! No annual subscriptions, no different versions for indies/pros, no complications for anyone.

Now, although the price is not yet decided, believe me when I say that it will be low, and even lower for those of you who decide to contribute to the campaign at IndieGoGo. I'm an indie, so I know what it feels like not being able to afford licenses from time to time. So during the campaign, it'd be like going out with some folks to a movie theatre, say, on a Friday night.

So, why this decision?

First, open source is difficult to keep alive over time. You need to coordinate efforts, review contributors' code, handle branches, and maybe even, at some point and to some extent, include contributors in the decision-making process.

And that is not all: most of the people on the team would likely have daytime jobs, so development of updates to the tool would be done at night, provided there is some spare time left. Going to college? Does your job demand most of your productive time? Have a wife/husband? How about some kids? Then you know the drill ...

It's no surprise that many open source APIs and tools eventually follow the commercial route, or that their owners publish a letter explaining why they cannot continue working on them or why updates will slow down. It's completely understandable! There's a lot of time, effort and even money put into them, and even though donations may be received, those eventually end up being not enough to even cover production costs. Not to mention, the cost of living.

So, instead of trying the open source route first and then following the commercial one, I prefer to skip that part and commercialize licenses of the APE from square one. Succeeding in this task will ensure the continuation of the tool, since I will dedicate not my spare time but my production time to make it happen. And if I fail, I'll continue to use it as is for in-house projects. No hard feelings.

I have one more thing to add in this respect as an example: the guys behind the Mono Framework started the project as a non-profit endeavour. But then they realized that in order to continue offering the products they loved to develop, a change of course was imminent. For many, this could have been a change in principles, but for me it was a wise decision. Today, they're successfully running Xamarin, they're growing strong, and their products are a must-have for every serious dev who wants to port .NET-based apps/games to many platforms. And even MSFT recognizes it!

Btw, regarding XNA's Content Pipeline: it was freely available as long as you didn't want to develop games for the Xbox 360 (and later, the Windows Phone). Otherwise, you had to pay an annual subscription for the Creators' Club and a registration fee for the Windows Phone (both now unified).

So, to wrap up this third question: before using the word "disappointing" because it won't be open source, please give the APE an opportunity to show off its key features and wait for the campaign at IndieGoGo. You won't be disappointed!

Ok, this is it for this part of the series. Hope you all come back for the upcoming part 6 next week.

See ya next time,
~Pete

PS: btw, I recommend you have a look at Xamarin's subscription plans if you haven't yet!