Showing posts with label Frameworks. Show all posts

Tuesday, December 31, 2013

REGARDING THE FUTURE OF C# ...

I'll be out of the office for a few days, so before I go let me post some words about the features I'd like to see implemented in upcoming versions of C# (btw, on my return I'll be posting again about The APE).

A couple of weeks ago, the "This Week On Channel 9" series referenced Mads Torgersen's presentation in London regarding "The Future of C#", announcing new features that may be implemented in C# 6.0.

So, in this post, let me explain some of the features that I hope get implemented in C# in the short/middle run:

1. LLVM-Like Compiler For Native Code


I talked about this many times, but I think it's a perfect time to mention it again.

So far, if you want to compile MSIL to native code ahead of time, you can use a tool called NGen, which creates a native image of the code for the machine where compilation takes place. The problem with this tool is that its main purpose is to reduce startup times. Meaning? You won't get fully optimized bits for the whole codebase; just for the blocks first executed when the program starts.

Imho, we need more … in this relatively new world of app marketplaces, it'd be really handy to have a model where you can deliver optimized native bits to the device/console/machine where the app will be downloaded and installed, don't you think?

Picture it like this: say you create an app/game with C# for the XBox One (using portable assemblies or not) and compile your source code to MSIL. Since the hardware of the console is the same in terms of main components (processor, memory, etc.), why not compile the whole MSIL codebase to native bits optimized for the console at once (either on your side or on MSFT's servers)?

With an LLVM-like compiler this could be achieved and extended to other platforms. But wait a minute! Isn't that what MSFT is doing for WP8? It sounds like it. But wait a minute, again! Isn't that something like the AOT compilation found in the Mono Framework? If the latter produces optimized bits for whole assemblies per platform, then it is!

In fact, many sources mention the so-called "Project N", which would be used to speed up Windows 8.1 apps. What is more, a few sources also mention that MSFT is working on a common compiler for C++/C#. I hope it also brings a more performant way to interop with C++.

True or not, this is a "must have" in order to end the C++/C# perf discussion!

2. "Single Instruction, Multiple Data" (SIMD)


In modern HW architectures, SIMD has become the standard way to boost performance in specific operations, in particular ("vectorized") mathematical ones.

As a matter of fact, C++ ships with the DirectXMath libraries (based on the ones formerly called XnaMath), which do implement SIMD but unfortunately are not available to C#.

Again, SIMD is already present in the Mono Framework for math operations, so why not add it to the .NET Framework once and for all?

I hope MSFT listens to us ...

3. Extension Properties


We have extension methods, so this is a natural corollary of them. Today, you can only mimic getters (and setters) like this:

   public static string NameToDisplayGetter(this IPerson person)
   {
      ...
   }

Then, why not have something like this?

   public static string NameToDisplay: IPerson person
   {
      get { ... } // You could also add a setter, if needed.
   }

Of course, the syntax in the example above may vary, but I guess you get the idea. There are several use cases where a feature like this could come in handy, including MVVM or plain INotifyPropertyChanged scenarios.
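In the meantime, the closest workable approximation is the extension-method getter mentioned at the start of this section. A minimal, self-contained sketch (IPerson, Person, and the display format are hypothetical names just for illustration):

```csharp
using System;

public interface IPerson
{
    string FirstName { get; set; }
    string LastName { get; set; }
}

public class Person : IPerson
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class PersonExtensions
{
    // Today's workaround: a method posing as a read-only "extension property".
    public static string NameToDisplay(this IPerson person)
    {
        return person.LastName + ", " + person.FirstName;
    }
}

public class Program
{
    public static void Main()
    {
        IPerson p = new Person { FirstName = "Ada", LastName = "Lovelace" };
        Console.WriteLine(p.NameToDisplay()); // Lovelace, Ada
    }
}
```

The obvious limitation, and the reason for the feature request, is that the call site reads as a method invocation rather than a property access, and setters are even clumsier.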

4. Generic Enum Constraints


Generics are one of my favorite .NET features. There are lots of things that can be achieved with them, but there is still room for improvement.

One of the things to improve is constraints. So far, when constraining the kind of type a generic parameter accepts, we have only two options: class and struct. So, what about enums?

Currently, if you want to mimic an enum constraint you will have to write something like ...

   public void OperationX<TEnum>(TEnum myEnum)
      where TEnum: struct, IComparable, IFormattable, IConvertible // ... and so on, so forth
   {
      … usual stuff …
   }

... and also, given that you are dealing with a set of constraints that only approximates an enum, you need to check whether an enum has actually been passed, generally throwing an exception if not:

    if (!typeof(TEnum).IsEnum)
    {
       throw new ArgumentException("The passed type is not an enum");
    }

Why not simplify it to something like this?

   public void OperationX<TEnum>(TEnum myEnum)
      where TEnum: enum
   {
      … usual stuff …
   }

Not only does it make sense, but it would also simplify things a lot and open up a wide range of handy operations and extension methods.
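For reference, the two snippets above can be put together into a compilable sketch of today's workaround (the Color enum and the string result are just for illustration):

```csharp
using System;

public enum Color { Red, Green, Blue }

public static class EnumOps
{
    // The closest constraints C# currently allows, plus the runtime check.
    public static string OperationX<TEnum>(TEnum myEnum)
        where TEnum : struct, IComparable, IFormattable, IConvertible
    {
        if (!typeof(TEnum).IsEnum)
        {
            throw new ArgumentException("The passed type is not an enum");
        }

        return typeof(TEnum).Name + "." + myEnum;
    }
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(EnumOps.OperationX(Color.Green)); // Color.Green

        // Note: EnumOps.OperationX(42) compiles, because int satisfies all
        // the constraints, but it throws ArgumentException at runtime.
        // That is exactly the gap a real "enum" constraint would close.
    }
}
```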

5. NumericValueType Base Class


.NET's CLR treats structs in a special way, even though they have a base class: ValueType.

I'll not be explaining here the characteristics of built-in primitives and structs; instead, I'll ask the following question: in the current version of C# can we achieve something like this ...?

   TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
     where TNumeric: struct
   {
       return number1 + number2 * number3;
   }

The answer: not without proper casts (the compiler cannot resolve the math operators for a generic type parameter). So, a life-changer for these situations would be to add a specialization of ValueType that enjoys the benefits of structs and also supports basic math operations without any kind of wizardry: NumericValueType.
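For completeness, one way to perform those casts today is to route the arithmetic through dynamic (available since C# 4.0), paying a runtime-dispatch cost on every operation. A minimal sketch of the workaround, not of the proposed feature:

```csharp
using System;

public static class GenericMath
{
    // Workaround sketch: dynamic defers operator resolution to runtime,
    // so number1 + number2 * number3 binds to the right overloads per type.
    public static TNumeric Calculate<TNumeric>(
        TNumeric number1, TNumeric number2, TNumeric number3)
        where TNumeric : struct
    {
        return (TNumeric)((dynamic)number1 + (dynamic)number2 * (dynamic)number3);
    }

    public static void Main()
    {
        Console.WriteLine(Calculate(1, 2, 3));       // 7
        Console.WriteLine(Calculate(1.5, 2.0, 3.0)); // 7.5
    }
}
```

A hypothetical "numeric" constraint would let the compiler bind those operators statically, removing both the cast and the runtime-dispatch penalty.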

With that class and a new reserved word like, say, "numeric", "number" or "primitive", we could write generic operations and extension methods with syntax as simple as:

   TNumeric Calculate<TNumeric>(TNumeric number1, TNumeric number2, TNumeric number3)
     where TNumeric: numeric
   {
       return number1 + number2 * number3;
   }

How about declaring new types of numbers? Easy ...

   public numeric Percentage
   {
      … usual stuff …
   }

... or ...

   public numeric Half
   {
      … usual stuff …
   }

No need to specify "struct", since "numeric" would be a value type that supports basic math operations (which we would need to implement when declaring the type, maybe by overriding some base operations), so in common scenarios there would be no need to cast values to do math.

6. Declaration of Static Operations On Interfaces


Put simply: the possibility of declaring static operations when writing interfaces, like this:

   public interface IMyInterface
   {
      static void DoStaticOp();

      static bool IsThisTrue { get; }

      ... instance properties and operations ...
   }

This presents a challenge to both polymorphic rules and abstract declarations at a "static" level. But as usual, with the proper rationale and care when modifying the language spec, it could be achieved. Many of you may be asking "why bother?", but believe me when I say that I have run into situations where static operations on interfaces would have come in really handy.
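Until (and unless) the spec changes, the usual stand-in is a by-convention static member looked up via reflection. A sketch, with all names hypothetical and reusing the interface name from the example above:

```csharp
using System;
using System.Reflection;

public interface IMyInterface
{
    // Static members cannot be declared here today, so implementers
    // expose them by convention instead (see Concrete below).
}

public class Concrete : IMyInterface
{
    public static bool IsThisTrue { get { return true; } }
}

public static class StaticDispatch
{
    // Reflection-based stand-in for a "static bool IsThisTrue { get; }"
    // declared on the interface.
    public static bool GetIsThisTrue<T>() where T : IMyInterface
    {
        PropertyInfo prop = typeof(T).GetProperty(
            "IsThisTrue", BindingFlags.Public | BindingFlags.Static);
        if (prop == null)
            throw new InvalidOperationException(
                typeof(T).Name + " does not follow the convention.");
        return (bool)prop.GetValue(null, null);
    }
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(StaticDispatch.GetIsThisTrue<Concrete>()); // True
    }
}
```

The drawback is obvious: the "contract" is enforced only at runtime, which is precisely what a static declaration on the interface would fix.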

Well, this is it for today. What would you guys like to see implemented in C# 6 and beyond? Comments are welcome.

See ya when I get back,
~Pete

Friday, February 15, 2013

THE FATE OF XNA … NOW WHAT?

Lately there has been lots of speculation and comments on the Web regarding the fate of XNA as a result of these blog-posts.

Due to technical difficulties with my main system I am arriving late at the party; many articles and tweets are out now, but anyway, I will give my view on the subject.

For me, the phase-out process that MSFT has been silently carrying out for, what, a couple of years, a year and a half, a year, <include your estimate here>, is not exactly a surprise. In fact, I stopped working on all projects based on XNA tech in late 2010 because something was troubling me.

At that time, I was an XNA/DX MVP creating my game engine, replacing XNA's Content Manager with my own version of it, and developing a videogame, to mention just a few activities, but for some reason I was holding myself back from starting a game-dev business based on XNA tech.

The hunches -based on facts- that supported my decision back then now, in hindsight, prove that my wait was right. Of course, it is important to note here that this worked for me; in other words, YMMV.

1. HUNCHES AND WARNING SIGNALS

Let’s see, in no particular order, these are the hunches that caught my attention:

  • Communiqués started to slow down: these were a great read on the XNA Team blog, but suddenly they started to fade out.
  • Our Community Manager moved to another division: we all remember her xxoo’s at the end of her messages and posts. That unexpected departure was the first warning signal to me.
  • XNA 4 was gradually presented as a “mature” product: or, expressed in a different way, XNA was not likely to receive (major) updates. Maybe this one was very difficult to grasp at that time, but for me it was the second warning signal.
  • Lack of strong support for XBLIG: how many times did community members (and even MVPs) ask for proper marketing, faster opening of new markets, or even a decent location on the Dashboard? In practice, MSFT turned out to be reluctant, so: third warning signal.
  • Lack of XBox Live services for XBLIG: in addition to the previous one, how many times did community members ask for Leaderboards, Achievements, DLC, and so on, so forth? Do you guys at MSFT really expect games with no global leaderboards to survive the increasing demands of gamers?
  • Communication of Future Moves to MVPs: in the past, before entering a new dev phase, the Team used to involve XNA/DX MVPs in design decisions. Maybe for many readers this is not relevant, but from the perspective of an MVP who to some extent used to be involved in the roadmap, being asked “what do you guys think of …?” only a few days before going public is a warning signal. The fourth one, indeed.
  • The format of .xnb files was published to the world: this one might have been handy to me if published a couple of years earlier, but combined with the one below, gives -more than an indication- a confirmation that MSFT was silently phasing out XNA. Fifth warning signal.
  • Gradual relocation of all members of the XNA Team: when you saw one of the most important programmers on the Team move to a different division at MSFT, with no one relocated or hired to take his place for further development of XNA, (please be honest here) did you really think that everything was ok? Sixth warning signal. A major one, if you ask me.
  • Unattended suggestions on Connect: after the database clean-up the XNA Team did on its Connect page, suggestions were increasingly marked as “Active”, “Postponed”, “By Design” and “Won’t be fixed”. Seventh warning signal.
  • DirectX SDK will not be updated any longer as such: let us clarify this point: the DirectX SDK was integrated into the Win8 SDK for the newest version of DX. What happened with the SDK for DX9.0c? Eighth warning signal.
  • No XNA 4 for Windows 8 RT: this is a technicality but, given that DirectX 9.0c does not get along with ARM processors, unless XNA gets a redesign based on DX 11.1, it gets pushed out of the picture for Surface (ARM-based) tablets. Since the XNA Team has been disbanded, unless a new official product unexpectedly comes out of the shadows for .NET, hoping for an official lifeline is kinda naive. Ninth warning signal.
  • XNA does not support WinPhone8, or does it?: after all the worries, talks and efforts to provide safe environments, MSFT radically changed course by allowing the execution of custom native code on the new Windows Phone 8 devices. This sounded like heaven for XNA’ers until MSFT announced that XNA wouldn’t add support for WinPhone8. Games created with XNA for WP7 still run on WP8 devices, but they will not be able to take advantage of unsafe operations on the device. Tenth warning signal.
  • XNA is not integrated into VS2012: as a corollary of the point above, XNA was not integrated into VS2012, which in turn means that if you need to use the content pipeline, you will have to install VS2010 side-by-side with VS2012. I don’t know, eleventh?
  • No MVP award for XNA/DirectX: I can understand the decision for XNA, given that it has been and still is being phased out, but why must the award for DirectX also be doomed? Despite the fact that the SDK is now part of the Win8 SDK, imho it is still a separate kind of expertise that cannot be merged with other areas. Final warning signal = confirmation.

As a former XNA/DX MVP as well as an old timer using MSFT’s technology, let me say that lately it has been really difficult to recommend the use of XNA to create games professionally given the facts above.

What can you say to devs when they ask questions like: “Can I use XNA for Windows RT?”, “Will XNA be integrated into VS2012?” or “Will XNA support DX11?”? Ditto for the question below …

2. WILL THERE BE A NEW OFFICIAL SOLUTION FOR .NET?

It is very difficult to foresee what’s coming next in terms of .NET and game development given the difficulties one may find when trying to deduce what the heck TPTB at MSFT are currently thinking/doing.

But let us see, to update XNA (or replace it) MSFT may consider that …:

  • … there is a novelty around “Going Native” with C++11 inside MSFT itself.
  • … to support ARM processors, the new tech needs to be built on top of DX11 APIs (which supports “legacy” cards by only enabling the subset of allowed features for the card).
  • … XNA is neither a pure DX9-wrapper nor a game engine, making it difficult to justify its maintenance.
  • …  the dream of “develop once, deploy to the three screens” vanished given that not all the features supported on the PC were supported on the 360 and the WP7 platforms. Plus, the screens are changing: WP8, Surface, XBox.Next, ...
  • … due to the managed design of XNA, and in spite of some impressive indie efforts (like this one and also this one), XNA lacked middleware support from the big fish in the Industry.
  • … there was never a world/level editor. XNA is VS-centric, so how can it compete with editor-centric solutions like Unity3D or UDK?
  • … last but not least, XBLIG failed as a business line and new lead marketplaces for indies emerged (Win8, WP8). Period.

So, to answer the original question: with C++ regaining ground inside MSFT and DX11.1 being mandatory for the latest platforms, why bother? Which leads us to the next question …

3. WHAT CAN “XNA’ers” DO NOW?

You feel disappointed. MSFT let you down (for some, again). You cannot find the exit from this nightmare. And you do not want to learn or get back to C++.

If that is your case, then do not panic! Right now there are many alternatives out there for you to consider, especially if you like or love C#:

1. SharpDX: created by Alexandre Mutel as an alternative to SlimDX, this pure wrapper of DirectX (from DX 9.0c to DX 11.1, both included) has positioned itself as the lead solution for advanced users who want to program DX games on top of the lowest level available to C#.

This set of APIs is open source, and it is consumed by many of the solutions listed next. What is more, games for Win8 from MSFT Studios (through partners like Arkadium) have been developed using SharpDX (e.g.: Minesweeper, Solitaire, and Mahjong).

Alexandre has also been developing a Toolkit to ease development of common tasks (sound familiar?), which certainly offers a bridge to those of us coming from XNA.

2. Monogame: the open source sibling of XNA. Fueled by SharpDX for all latest Windows-based platforms. Multiplatform not only for Windows, thanks to Mono.

With few-to-none modifications to the source code of your XNA creations, you can port your games to a wide variety of platforms.

This open source solution has recently reached its third stable version, adding many requested features, like 3D support.

Although it lacks a content pipeline replacement, which is currently under development, it can be used from VS 2010 and VS 2012.

Many well-known games have been created with Monogame (or adaptations of it), like Bastion and Armed!, among others.

Last but not least, the community is growing strong around Monogame. As a matter of fact, if you like “the XNA-way” then this is your perfect choice.

3. ANX: a competitor to Monogame. Its name, in case you did not notice, is XNA reversed. Recently, after a long wait, v0.5_beta has been published.

Not many games have been created with this solution yet and its community is rather small –in comparison with Monogame’s– but its progress is definitely worth following closely.

4. Paradox: I really do not know how Alexandre finds the spare time, but he is also developing a multiplatform game-dev solution for .NET with a data-driven editor!

Of course, the Windows-targeted portion of Paradox is based on SharpDX, but the engine will also offer deployment to other platforms based on OpenGL.

No prices or release dates have been disclosed yet, but having read about the features and watched images and demo videos, it is by far a very serious alternative to consider.

5. DeltaEngine: the lead dev of this multiplatform solution is the first XNA MVP that wrote a book about XNA.

Coding by using this solution resembles coding with XNA. It has its own multiplatform content pipeline which optimizes output per platform, among other tools. And games like Soulcraft show off the power of the solution.

You can check the pricing here.

6. Axiom: having used this solution before the days of XNA, I am very pleased to see that the project has revived.

Axiom is now a multiplatform solution for .NET based on the popular OGRE graphics engine, and it also consumes SharpDX for Windows targets.

Honestly, I do not know whether there are games created (and published) with this solution, but I hope there will be, sooner rather than later.

7. WaveEngine: Vicente Cartas (MVP for XNA/DX) has just let me know about this cross-platform engine, which will be released as a beta in less than a day (thanks for the tip!).

Oriented towards the mobile-dev market, the engine is the result of a two-year effort by the Wave Engine team. Knowing Vicente’s past work on JadEngine, I cannot wait to watch some cool demo videos here (like Bye Bye Brain).

Best of all, the engine is completely free, so it is without a doubt worth trying as soon as it gets released!

8. Unity3D: I cannot forget to mention Unity3D since it started almost at the same time XNA did; however, adoption among devs grew exponentially in later years thanks to a combination of factors: a robust editor, multiplatform support, an increasing number of appealing features, and a variety of well-known success stories among indies (for instance, ShadowGun).

Make no mistake here, the experience of using Unity3D is quite different from XNA’s: it’s editor-centric; code –whether in C#, JavaScript or Boo– serves as scripts; sometimes you need to broadcast messages, as opposed to following an OOP rationale; and last but not least, 2D programming is not straightforward (not even in the latest version; you need to buy one of the available plugins as a workaround).

You can check the pricing here.

As you can see, even if no official solution replaces XNA, its spirit remains in many of its successors, all of which support the latest DX11 HW.

So imho as a dev, there is no need to worry. Your knowledge is still valid for the above-mentioned alternatives.

4. OK, BUT WHAT ABOUT MSFT?

Well, imho it would be deemed as positive by XNA’ers (and indies in general) if MSFT …:

  • … does not try to impose C++ as the only language to develop quality games.
  • … develops a common compiler for C++/C#, for all supported platforms.
  • … implements SIMD ops for .NET (please vote for it).
  • … reduces differences for .NET among the latest “screens”.
  • … supports open efforts like SharpDX and Monogame (it seems it will).
  • … publishes as open source the parts of XNA’s source code that do not imply a security risk or bring any potential legal issues to the table (like, say, the content pipeline).
  • … reduces barriers for indies (like, say, access to XBox Live services) on the upcoming XBox.Next so as to compete with other platforms like Ouya, iOS, Steam, and so on, so forth.
  • … and continues to support indies through initiatives like the Dream.Build.Play compo.

Personally, I do not care which language or solution a dev picks to develop a game, provided it is the right language or solution for the project. In this sense, the “Going Native” campaign that some people at MSFT seem to support by stressing perf differences between C++ and C# whenever they can is imho unnecessary, given the fact that there are many successful indie games out there developed with managed code.

Plus, as a former C++ dev, I do not want to go back to C++ because I feel really comfortable with C#. If at some point in the future I had to move to a lower-level language, I would prefer “D”.

Thus, I hope MSFT creates a common compiler for C++/C#, which in turn will help make hybrid solutions a common scenario for indies.

5. TO WRAP IT UP …

Without starting a nonsense discussion for a Pyrrhic Victory, imho the fate of XNA was predictable if you took a careful look at announcements from MSFT, whether you deemed them as facts or mere hunches.

But one thing remains strong for sure: XNA’s spirit.

Thanks to solutions like SharpDX and Monogame one can still talk about C# and XNA-based coding as a valid option for a game-dev business.

Cheers #becauseofxna!
~Pete

Monday, August 13, 2012

.NET MUST DIE … TO GO OR NOT TO GO NATIVE …

… is that the question? … not really.

From time to time I dare ask technical questions to experts in the fields of native+managed worlds so as to better understand the differences, performance-wise, between code originally written with a native language like C++ and “native images” of code written with a managed language like, as of today, C#.

Given the buzz around the resurgence of C++ brought by revision 11, in one of my latest Q&A adventures I dared ask Alexandre Mutel about the eventual penalties –if any– of calling a wrapped operation in C# once the assembly gets compiled ahead of time with NGen (or its Mono equivalent, AOT compilation). Like, say, the following:

[DllImport("SomeDLL.dll")]
public static extern int SomeOperation(int h, string c, ref SomeStruct rStruct, uint type);

[For those of you that still don’t know him, Alexandre Mutel is the creator of, inter alia, SharpDX: “a free and active open-source project that is delivering a full-featured Managed DirectX API”, which is currently leveraging the DirectX-side of projects like Monogame and ANX, among others; being imvho the perfect choice for those of us who don’t want to go back to C++ and once embraced the old ManagedDX solution that then was called off by MSFT in order to give birth to XNA a few months later].

I won’t dare claim that Alexandre posted this impressive article because of my email question (or my prior request for DirectXMath support in SharpDX due to SIMD), but I must admit that it dispels any doubt I might have had in the past in that regard and leads me to concur that .NET must die.

In his article, Alexandre mentions an interesting detail, or fact if you’d mind, when speaking of a managed language:

… the performance level is indeed below a well written C++ application …

… and also that:

… the meaning of the “native” word has slightly shifted to be strongly and implicitly coupled with the word “performance”.

He also references two articles about the real benefits of better Jittering:

And there is a finding on the Channel9 forums indicating that MSFT is hiring to create a unique compiler to be used for both C++ and C#.

So, after reading all of the above-mentioned material, if you have reached a point in your programming life where you do seek performance over safety, is the real question still whether you should go native?

Imvho, the question has then turned into “how”.

The fact that a native solution gives the performance level you are looking for does not mean that you must use only the C++ language. Even with the additions found in C++11 (a few of which could arguably have stemmed from managed languages), it still has a cumbersome and unfriendly syntax.

Nor, what is more, does it mean that you won’t be able to use a language like C# to get an optimized native application for whichever platform you need (even the Web).

If, in order to get native bits, we always had to stick to “low-level” languages, then we would never have moved from Assembler (or even binary notation) to C and all of its offspring. The evolution of hardware and compilers eventually made C++ a better choice than ASM for performance-oriented apps given that, over time, the penalty curve decreased to the point where it became irrelevant for native programmers.

Therefore, what if you could get rid of jitting (replacing it with a fully performance-oriented LLVM compiler) and still have an efficient GC for cases where manual memory (de)allocation is not needed?

Much as I hate Objective-C due to its ugly syntax, its newest versions for the Mac (and lately, the iOS) platforms offer LLVM native bits with GC.

And what about a friendlier language like “D” instead? The latest evidence leads me to believe that C-based languages are moving in its direction.

My point is that going native does not necessarily mean that all the memory management of your program must avoid a garbage collector for efficiency. Nor that you have to use languages with cumbersome or unfriendly syntax to get the most efficiency. It depends mainly on how compilers and memory-management tech evolve side by side to get the most out of the target platform, how unsafe you can go with a given language where and when needed, and how penalty-free your calls to native operations in external binaries can be.

For instance, even with its limitations, you can do some unsafe programming with C# (fixed, stackalloc, etc.). The problem is that this feature is not allowed on all platforms (like WinPhone7), and on some platforms the set of operations is limited (e.g.: stackalloc is not available on the Compact Framework for the XBox 360).
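For readers unfamiliar with those features, here is a minimal sketch of fixed and stackalloc in action (the code must be compiled with the /unsafe switch; the numbers are purely illustrative):

```csharp
using System;

public static class UnsafeDemo
{
    // stackalloc: allocate a buffer on the stack, with no GC pressure.
    public static unsafe int SumOnStack(int count)
    {
        int* buffer = stackalloc int[count];
        for (int i = 0; i < count; i++)
            buffer[i] = i + 1;

        int sum = 0;
        for (int i = 0; i < count; i++)
            sum += buffer[i];
        return sum;
    }

    // fixed: pin a managed array so pointer arithmetic can touch it
    // without the GC moving it around mid-operation.
    public static unsafe int SumPinned(int[] data)
    {
        int sum = 0;
        fixed (int* p = data)
        {
            for (int i = 0; i < data.Length; i++)
                sum += p[i];
        }
        return sum;
    }

    public static void Main()
    {
        Console.WriteLine(SumOnStack(4));                // 10
        Console.WriteLine(SumPinned(new[] { 1, 2, 3 })); // 6
    }
}
```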

And again, the D language seems to provide a friendly syntax (close to C#) while offering a power similar to C++.

Personally, I feel quite comfortable with C#; let’s be real here for a moment: I won’t be creating a Halo-like game any time soon, but I don’t want to go back to C++ just to, say, consume DirectX11 APIs. Having said that, I really hope C# evolves in a way that makes the arguments of “native” programmers trivial and the industry embraces it (as it once embraced C/C++ to minimize the use of ASM). Evidence shows C# will evolve in this field, but as usual, time will tell …

To wrap it up, does going native imply that .NET should die so that a syntax-friendly language like C# would survive? …

Short answer: yes (or at least, as we know it today). Long answer: read all of the links provided in this post and see for yourself ;)

My two cents,
~Pete

Thursday, October 14, 2010

REPLACING THE CONTENT MANAGER API: PRELUDE

As I briefly mentioned in one of my previous posts, for a period of time (close to 2 years, now) I have been developing my own API to manage content as a replacement for the one delivered as part of the XNA Framework.

Why? At first I was trying to circumvent the limitations of the latter related to its inability to properly handle the Dispose operation. And while doing so, I realized –when I checked the source code of the original with “Reflector”– that there was room for improvement (with a redesign).

After almost two years of development as part of my own XNA-based videogame engine (the latter still in endless development), I decided to redesign my content manager as an independent module so that it can be used with XNA without the need to plug in my engine (in whole or in part).

I will not release the source code of my API, so in the next installments of this series I will publicly discuss what my API has to offer, focusing on The-What rather than on The-How; thus, I will mainly present features and results (I know, yes, “what a bummer!”).

So, what’s coming next? In part 1 (and most likely part 2 as well) I will introduce my API, explaining the differences with the built-in one, and after that I will start talking about the results obtained (yes! with numbers, screenshots and such) on the different platforms I tested.

‘till next time,
~Pete

> Link to Spanish version.

Friday, September 10, 2010

GETTING A REFERENCE TO A METHOD-CALLING OBJECT IN C#

Ok, after taking a month off from any blogging activity, I guess it is time to catch up and blog again. And I believe there is no better way of doing so than writing an article explaining a way to get a reference to the object that calls a specific method.

For the last two years or so, I have been working hard on a new Content Manager replacement API for the XNA Framework to use in my own “never-ending” game engine (I will blog about this later on). In some early stages of development I found a way to solve this referencing burden that has been creating headaches for many devs globally (me being one of them, for a long time).

Although I must admit that at some point in the development cycle of my API I decided to abandon this workaround, after reading one of the articles mentioned by Shawn here (guess which one), I believe it’s a great opportunity to talk about it.

A little history … before finding this solution I tried everything: from attempting to use –with no luck– the info held in StackFrame instances, and even delegates, to finally “quitting” and using a naive approach: passing the caller as a parameter in the signature of an operation. But then came .NET Framework 3.5 and extension methods to the rescue!

Please bear in mind that the example presented in this article is a simple one meant to show how the technique works, but believe my words when I say that it can be extended to complex scenarios.

Having said that … suppose you want to know which object is calling a certain method of a class like the following (which may or may not be sealed):

  public sealed class MyCalledClass
  {
    internal bool JustReturnTrue(int myParameter)
    {
      return true;
    }
  }

The caller being, say, a specialization of the following class:

  public abstract class MyCallingClass
  {
    public void Print()
    {
      Console.WriteLine("I am the object which is just about to call " +
        "the monitored operation named 'JustReturnTrue' …");
    }
  }

Then all you have to do is to implement the following generic extension method for the types you want (in my example, “MyCallingClass”):

  public static class MyExtensionMethod
  {
    public static bool ExecuteOperation<T>
      (this T mycallingClass, int myParameter)
      where T: MyCallingClass
    {
      mycallingClass.Print(); // Replace it with your own stuff.
 
      return new MyCalledClass().JustReturnTrue(myParameter);
    }
  }

The trick here is to design and split assemblies and namespaces in a way that all instances of “MyCallingClass” cannot directly execute the “JustReturnTrue” operation (I leave that task as an exercise to the reader).

But there is one catch to watch closely. By doing this you are actually adding one more call (you know that), which is generally not a problem on Windows, but on the XBox 360 and all devices using the Compact Framework it could turn out to be expensive if used many times in intensive or heavy tasks/loops.

But if you really need it when speed is not an issue, or for operations where –for example– you need to set owners to something “automagically” behind the scenes and later assess whether an owner calls a restricted operation before executing it, then there you have it!
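To see the pieces wired together, here is a self-contained sketch of the whole pattern (ConcreteCaller is a hypothetical subclass added just for the demo):

```csharp
using System;

public sealed class MyCalledClass
{
    internal bool JustReturnTrue(int myParameter)
    {
        return true;
    }
}

public abstract class MyCallingClass
{
    public void Print()
    {
        Console.WriteLine("I am the object which is just about to call " +
            "the monitored operation named 'JustReturnTrue' …");
    }
}

// Hypothetical concrete caller, for demonstration only.
public class ConcreteCaller : MyCallingClass { }

public static class MyExtensionMethod
{
    // The extension method receives the caller implicitly as "this",
    // so no extra parameter is needed at the call site.
    public static bool ExecuteOperation<T>(this T myCallingClass, int myParameter)
        where T : MyCallingClass
    {
        myCallingClass.Print(); // We hold a reference to the caller here.
        return new MyCalledClass().JustReturnTrue(myParameter);
    }
}

public class Program
{
    public static void Main()
    {
        var caller = new ConcreteCaller();
        Console.WriteLine(caller.ExecuteOperation(42)); // True
    }
}
```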

Just use it wisely …
~Pete

> Link to Spanish version.

Friday, January 29, 2010

WATCHING THE SUN BURN - PART 3

A little while ago I wrote a couple of tech-oriented posts regarding the rendering engine -for XNA GS- known as “Sunburn”.

This time, I’ll skip the technical side of things to share some great news about the latest version of it, which has been just released: it fully supports lighting and shadowing for Avatars!!!

Is that a soft shadow I see projected in there? Cool … gotta download v1.3.1 right now!

Enjoy!
~Pete

> Link to Spanish version.

Wednesday, October 14, 2009

NEW XBOX CONSOLE FOR 2012 ?

Lately, there’s been some buzz regarding the near future of MSFT’s main gaming console.

Words like “XBox 720”, “X-Engine” and “Project Phoenix” have been used. But what do they mean, exactly?

According to this article, “Project Phoenix” is the name internally given by MSFT for what some people outside the Company have called “XBox 720”.

Now, a more recent article states that the new console would hit the market in 2012, with a new gfx card from ATI, so as to maintain backward compatibility as well as boost performance.

Regarding the latter (performance), a new “X” engine would have recently been released, including not only a new set of tools but also “a whole new way to develop for the system” (as this third article says).

I don’t know whether all these “news” are true, but they make for an interesting read nonetheless.

Plus, imvho, there’s a lot of “360” yet to enjoy, especially when Project Natal gets out; don’t you think?

Stay tuned,
~Pete

 

> Link to Spanish version.

Thursday, September 24, 2009

WATCHING THE SUN BURN - PART 2

In part 1 of the series, I introduced the lighting and rendering engine named “Sunburn”, from Synapse Gaming.

Well, version 1.2.4 of the engine is out, bringing a boost in performance for the forward rendering technique.

Hereunder you will find my latest results for the reflection/refraction demo (which uses the above-mentioned technique):

1) PC platform:

  • Min: 37 fps -> watching the three orbs from almost the roof (close to the windows on top),
  • Max: 60 fps -> watching one of the windows on top, and
  • Average: 43 fps -> in general (sometimes a bit more on the corridors, not facing the orbs).

2) XBox 360 console:

  • Min: 28 fps -> same case as the PC platform,
  • Max: 54 fps -> ditto,
  • Average: 32 fps -> ditto.

It’s important to notice that, in this demo, the scene is rendered three times per frame: once for the reflection image, once for the refraction image, and once for the final output. Why? To allow dynamic behavior. Meaning? Instead of faking the effect with “static” snapshots, what you see is recalculated in real time; thus, moving objects are caught by the process and properly refracted and reflected.
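Schematically, the per-frame flow can be sketched as follows. This is illustrative only (the class, method, and target names are hypothetical placeholders, not Sunburn's actual API); it simply records the pass ordering described above:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: tracks the order of the three scene passes per frame.
public class ThreePassSketch
{
    public List<string> Passes { get; } = new List<string>();

    // Stand-in for rendering the full scene into the named target.
    void RenderScene(string target)
    {
        Passes.Add(target);
    }

    public void DrawFrame()
    {
        RenderScene("refraction"); // pass 1: what is seen through the surface
        RenderScene("reflection"); // pass 2: what is mirrored by the surface
        RenderScene("main");       // pass 3: final output, sampling both maps
    }
}
```

Because the first two passes re-render the live scene into textures every frame, anything that moves shows up correctly in the water; the price is roughly three times the scene-traversal cost, which is why the demo drops the quality of the first two passes.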

Again, for both tests I drew the final image on a high-resolution back buffer of 1280x720, but this time I also ran the tests on the XBox 360 (both with a variable time step).

Given those results, I cannot wait to see a version with deferred rendering on the XBox 360!

Now, how difficult is it to use Sunburn? Let’s find out, shall we?

I. The Code.

Let’s continue using the “Reflection/Refraction” demo to demonstrate how to use Sunburn. Please download the demo project from SG’s download section before reading on.

When you open the project with Visual Studio 2008, you will find a usual initial structure: the game class and the content folder.

In what follows, I’ll explain what’s included in those, based on the example, covering only the code relative to the engine itself:

i. Using Statements

There are plenty of namespaces to reference, but for this example I will concentrate my comments on just two of them:

   1: using SynapseGaming.LightingSystem.Effects.Forward;
   2: ...
   3: using SynapseGaming.LightingSystem.Rendering.Forward;

In case you decide to use deferred rendering, you’ll have to change “Forward” to “Deferred” here (and then, as the syntax checking done by VS08 will show you, you’ll also have to modify some parts of the code accordingly).

ii. Fields

The fields added to the game class, compared to a standard one, can be separated into three categories: a) the lighting system, b) scene members, and c) technique members.

a) basically, you must declare here both the lighting and rendering managers, plus the helpers that will provide specific data to the system, like environmental information and lighting plus detail preferences.

   1: LightingSystemManager lightingSystemManager;
   2: RenderManager renderManager;
   3: ...
   4: SceneState sceneState;
   5: SceneEnvironment environment;
   6: LightingSystemPreferences preferences;
   7: ...
   8: DetailPreference renderQuality = DetailPreference.Off;

b) along with the meshes that make up the scene graph, you must declare all the lights that will be used in the scene, plus their respective “rigs”; now, what’s a rig? It’s a container that stores, organizes and helps share the lights in the scene.

   1: ...
   2: LightRig lightRig;
   3: PointLight keyLight;
   4: PointLight fillLight;
   5: DirectionalLight sunLight;
   6: ...

c) mainly, you must declare the forward-rendering effects along with the render target helpers (the latter give support for reflection, refraction and the usual render-to-texture draw calls).

   1: SasEffect orbEffect;
   2: SasEffect waterEffect;
   3: ...
   4: RenderTargetHelper refractionTarget;
   5: RenderTargetHelper waterReflectionTarget;

iii. Constructor

Moving on to the initialization members, you must instantiate most of the above-mentioned fields and set the lighting and detail preferences based on the target platform.

   1: // Load the user preferences (example - not required).
   2: preferences = new LightingSystemPreferences();
   3: #if !XBOX
   4: if (File.Exists(userPreferencesFile))
   5:     preferences.LoadFromFile(userPreferencesFile);
   6: else
   7: #endif
   8: {
   9:     preferences.EffectDetail = DetailPreference.High;
  10:     preferences.MaxAnisotropy = 1;
  11:     preferences.PostProcessingDetail = DetailPreference.High;
  12:     preferences.ShadowDetail = DetailPreference.Low;
  13:     preferences.ShadowQuality = 1.0f;
  14:     preferences.TextureQuality = DetailPreference.High;
  15:     preferences.TextureSampling = SamplingPreference.Anisotropic;
  16: }

It is interesting to notice that, for the PC platform, the example provides a means of selecting the level of detail based on the gfx card’s manufacturer and model number.

   1: // Pick the best performance options based on hardware.
   2:  VideoHardwareHelper hardware = new VideoHardwareHelper();
   3:  
   4:  if (hardware.Manufacturer == VideoHardwareHelper.VideoManufacturer.Nvidia)
   5:  {
   6:      if (hardware.ModelNumber >= 8800 || hardware.ModelNumber < 1000)
   7:          renderQuality = DetailPreference.High;
   8:      else if (hardware.ModelNumber >= 7800)
   9:          renderQuality = DetailPreference.Medium;
  10:      else if (hardware.ModelNumber >= 6800)
  11:          renderQuality = DetailPreference.Low;
  12:  }
  13:  else if (hardware.Manufacturer == VideoHardwareHelper.VideoManufacturer.Ati)
  14:  {
  15:      if (hardware.ModelNumber >= 3800)
  16:          renderQuality = DetailPreference.High;
  17:      else if (hardware.ModelNumber >= 3400)
  18:          renderQuality = DetailPreference.Medium;
  19:      else if (hardware.ModelNumber >= 2600)
  20:          renderQuality = DetailPreference.Low;
  21:  }
  22:  
  23:  switch (renderQuality)
  24:  {
  25:      case DetailPreference.High:
  26:          reflectionRefractionTargetSize = 512;
  27:          reflectionRefractionTargetMultiSampleType = MultiSampleType.TwoSamples;
  28:          graphics.PreferMultiSampling = true;
  29:          break;
  30:      case DetailPreference.Medium:
  31:          reflectionRefractionTargetSize = 256;
  32:          reflectionRefractionTargetMultiSampleType = MultiSampleType.TwoSamples;
  33:          graphics.PreferMultiSampling = true;
  34:          break;
  35:      case DetailPreference.Low:
  36:          reflectionRefractionTargetSize = 128;
  37:          reflectionRefractionTargetMultiSampleType = MultiSampleType.TwoSamples;
  38:          graphics.PreferMultiSampling = false;
  39:          break;
  40:      case DetailPreference.Off:
  41:          reflectionRefractionTargetSize = 128;
  42:          reflectionRefractionTargetMultiSampleType = MultiSampleType.None;
  43:          graphics.PreferMultiSampling = false;
  44:          break;
  45:  }

Important: since the engine’s render manager uses by default a “page” size of 2048 pixels for shadow mapping, when targeting the Xbox 360 this value must be reduced to 1024 pixels so that the page completely fits within the 360’s EDRAM, thus avoiding predicated tiling.

   1: ...
   2: renderManager.ShadowManager.PageSize = 1024;
   3: ...
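The arithmetic behind that note can be sketched as a rough budget check. This is illustrative only: it assumes a 32-bit-per-pixel page format and ignores the back buffer and depth buffer, which also share EDRAM:

```csharp
using System;

// Rough EDRAM budget check (illustrative; assumes 32 bits per pixel and
// ignores the back buffer and depth buffer that also live in EDRAM).
public static class EdramBudget
{
    // The Xbox 360 has 10 MB of EDRAM.
    public const int EdramBytes = 10 * 1024 * 1024;

    // Size in bytes of a square shadow "page" of the given dimension.
    public static int PageBytes(int pageSize)
    {
        return pageSize * pageSize * 4;
    }
}
```

Under those assumptions, `PageBytes(2048)` is 16 MB, which exceeds the 10 MB of EDRAM and would force predicated tiling, whereas `PageBytes(1024)` is only 4 MB and fits with room to spare.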

iv. Loading Content:

First, you must create the render-target helpers and apply the appropriate preferences (in this case, the ones used to render the reflection and refraction effects) to each corresponding helper.

   1: // Create reflection / refraction targets.  Note the refraction target is using
   2: // the "Standard" type to avoid clipping as the map is used by all refractive
   3: // objects (not just one with a specific surface plane).  See the comments at the
   4: // top of the page for details on why this is done.
   5:  
   6: refractionTarget = new RenderTargetHelper(graphics, RenderTargetHelper.TargetType.Standard,
   7:     reflectionRefractionTargetSize, reflectionRefractionTargetSize, 1, SurfaceFormat.Color,
   8:     reflectionRefractionTargetMultiSampleType, 0, RenderTargetUsage.PlatformContents);
   9:  
  10: waterReflectionTarget = new RenderTargetHelper(graphics, RenderTargetHelper.TargetType.Reflection,
  11:     reflectionRefractionTargetSize, reflectionRefractionTargetSize, 1, SurfaceFormat.Color,
  12:     reflectionRefractionTargetMultiSampleType, 0, RenderTargetUsage.PlatformContents);
  13:  
  14:  
  15: // Setup the refraction and reflection preferences.  These preferences are
  16: // set to a lower quality than the main scene's rendering to increase performance
  17: // and because reflection / refraction distortions from the normal map will
  18: // hide the quality.
  19:  
  20: refractionPreferences = new LightingSystemPreferences();
  21: refractionPreferences.EffectDetail = DetailPreference.Low;
  22: refractionPreferences.MaxAnisotropy = 0;
  23: refractionPreferences.PostProcessingDetail = DetailPreference.Low;
  24: refractionPreferences.ShadowDetail = DetailPreference.Low;
  25: refractionPreferences.ShadowQuality = 0.25f;
  26: refractionPreferences.TextureSampling = SamplingPreference.Trilinear;
  27:  
  28: refractionTarget.ApplyPreferences(refractionPreferences);
  29: waterReflectionTarget.ApplyPreferences(refractionPreferences);

Then you must read from disk the values that specify how to set up the effects, in order to use them as materials.

   1: // Load the custom materials / effects used by the additional reflection / refraction
   2: // rendering pass.  These materials both use the same FX file with different material options.
   3:  
   4: orbEffect = Content.Load<SasEffect>("Effects/Orb");
   5: waterEffect = Content.Load<SasEffect>("Effects/Water");

The content of the raw “.mat” files is as follows:

//-----------------------------------------------
// Synapse Gaming - SunBurn Lighting System
// Exported from the SunBurn material editor
//-----------------------------------------------
 
Locale: en-US
 
AffectsRenderStates: False
BlendColor: 0.6 0.6 0.4
BlendColorAmount: 0
BumpAmount: 0.017
BumpTexture: ""
DoubleSided: False
EffectFile: "ReflectionRefraction.fx"
Invariant: False
ReflectAmount: 0.5
ReflectTexture: ""
RefractTexture: ""
ShadowGenerationTechnique: ""
Technique: "Technique1"
Tint: 0.8627451 0.9254902 0.9647059
Transparency: 0.5
TransparencyMapParameterName: "ReflectTexture"
TransparencyMode: None

Next, you load the structure of the light rig, which declares each light and its respective settings, and then traverse that structure to obtain references to the individual lights.

   1: // LightRigs contain many lights and light groups.
   2: lightRig = Content.Load<LightRig>("Lights/Lights");
   3:  
   4: // Need to find the lights for later performance adjustments.
   5: foreach (ILightGroup group in lightRig.LightGroups)
   6: {
   7:     foreach (ILight light in group.Lights)
   8:     {
   9:         if (light is PointLight)
  10:         {
  11:             PointLight pointlight = light as PointLight;
  12:  
  13:             if (pointlight.Name == "FillLight")
  14:                 fillLight = pointlight;
  15:             else if (pointlight.Name == "KeyLight")
  16:                 keyLight = pointlight;
  17:         }
  18:         else if (light is DirectionalLight)
  19:         {
  20:             DirectionalLight dirlight = light as DirectionalLight;
  21:  
  22:             if (dirlight.Name == "Sun")
  23:                 sunLight = dirlight;
  24:         }
  25:     }
  26: }

The content of the raw “.rig” files is as follows:

<root>
  <LightRig>
    <LightGroups>
      <GroupList>
        <item_0>
          <LightGroup>
            <Name>EnvLighting</Name>
            <ShadowType>SceneLifeSpanObjects</ShadowType>
            <Position>
              <Vector3>
                <X>0</X>
                <Y>0</Y>
                <Z>0</Z>
              </Vector3>
            </Position>
            <Radius>0</Radius>
            <ShadowQuality>0.5</ShadowQuality>
            <ShadowPrimaryBias>1</ShadowPrimaryBias>
            <ShadowSecondaryBias>0.2</ShadowSecondaryBias>
            <ShadowPerSurfaceLOD>True</ShadowPerSurfaceLOD>
            <ShadowGroup>False</ShadowGroup>
            <Lights>
              <LightList>
                <item_0>
                  <AmbientLight>
                    <Name>Ambient Lighting</Name>
                    <Enabled>True</Enabled>
                    <DiffuseColor>
                      <Vector3>
                        <X>1</X>
                        <Y>0.6431373</Y>
                        <Z>0.04313726</Z>
                      </Vector3>
                    </DiffuseColor>
                    <Intensity>0.3</Intensity>
                  </AmbientLight>
                </item_0>
                <item_1>
                  <DirectionalLight>
                    <Name>Sun</Name>
                    <Enabled>True</Enabled>
                    <DiffuseColor>
                      <Vector3>
                        <X>1</X>
                        <Y>0.972549</Y>
                        <Z>0.772549</Z>
                      </Vector3>
                    </DiffuseColor>
                    <Intensity>2.6</Intensity>
                    <ShadowType>AllObjects</ShadowType>
                    <Direction>
                      <Vector3>
                        <X>-0.5012565</X>
                        <Y>-0.8552828</Y>
                        <Z>-0.1312759</Z>
                      </Vector3>
                    </Direction>
                    <ShadowQuality>2</ShadowQuality>
                    <ShadowPrimaryBias>1.3</ShadowPrimaryBias>
                    <ShadowSecondaryBias>0.01</ShadowSecondaryBias>
                    <ShadowPerSurfaceLOD>True</ShadowPerSurfaceLOD>
                  </DirectionalLight>
                </item_1>
              </LightList>
            </Lights>
          </LightGroup>
        </item_0>
        <item_1>
          <LightGroup>
            <Name>OrbLighting</Name>
            <ShadowType>SceneLifeSpanObjects</ShadowType>
            <Position>
              <Vector3>
                <X>0</X>
                <Y>0</Y>
                <Z>0</Z>
              </Vector3>
            </Position>
            <Radius>0</Radius>
            <ShadowQuality>0.5</ShadowQuality>
            <ShadowPrimaryBias>1</ShadowPrimaryBias>
            <ShadowSecondaryBias>0.2</ShadowSecondaryBias>
            <ShadowPerSurfaceLOD>True</ShadowPerSurfaceLOD>
            <ShadowGroup>False</ShadowGroup>
            <Lights>
              <LightList>
                <item_0>
                  <PointLight>
                    <Name>FillLight</Name>
                    <Enabled>True</Enabled>
                    <DiffuseColor>
                      <Vector3>
                        <X>0.3803922</X>
                        <Y>0.8313726</Y>
                        <Z>0.9411765</Z>
                      </Vector3>
                    </DiffuseColor>
                    <Intensity>3.8</Intensity>
                    <FillLight>True</FillLight>
                    <FalloffStrength>0</FalloffStrength>
                    <ShadowType>AllObjects</ShadowType>
                    <Position>
                      <Vector3>
                        <X>25.83315</X>
                        <Y>10.99056</Y>
                        <Z>-75.42744</Z>
                      </Vector3>
                    </Position>
                    <Radius>46</Radius>
                    <ShadowQuality>0</ShadowQuality>
                    <ShadowPrimaryBias>1</ShadowPrimaryBias>
                    <ShadowSecondaryBias>0.2</ShadowSecondaryBias>
                    <ShadowPerSurfaceLOD>True</ShadowPerSurfaceLOD>
                  </PointLight>
                </item_0>
                <item_1>
                  <PointLight>
                    <Name>KeyLight</Name>
                    <Enabled>True</Enabled>
                    <DiffuseColor>
                      <Vector3>
                        <X>0.4627451</X>
                        <Y>0.8980392</Y>
                        <Z>1</Z>
                      </Vector3>
                    </DiffuseColor>
                    <Intensity>0.6</Intensity>
                    <FillLight>False</FillLight>
                    <FalloffStrength>0</FalloffStrength>
                    <ShadowType>AllObjects</ShadowType>
                    <Position>
                      <Vector3>
                        <X>25.83315</X>
                        <Y>10.99056</Y>
                        <Z>-75.42744</Z>
                      </Vector3>
                    </Position>
                    <Radius>110</Radius>
                    <ShadowQuality>0.25</ShadowQuality>
                    <ShadowPrimaryBias>1</ShadowPrimaryBias>
                    <ShadowSecondaryBias>0.2</ShadowSecondaryBias>
                    <ShadowPerSurfaceLOD>True</ShadowPerSurfaceLOD>
                  </PointLight>
                </item_1>
              </LightList>
            </Lights>
          </LightGroup>
        </item_1>
      </GroupList>
    </LightGroups>
  </LightRig>
</root>

Finally, you load the data that configures the environment.

   1: // Load the scene settings.
   2: environment = Content.Load<SceneEnvironment>("Environment/Environment");

The raw content of the “.env” files is the following:

<root>
  <SceneEnvironment>
    <VisibleDistance>500</VisibleDistance>
    <FogEnabled>True</FogEnabled>
    <FogColor>
      <Vector3>
        <X>0</X>
        <Y>0</Y>
        <Z>0</Z>
      </Vector3>
    </FogColor>
    <FogStartDistance>70</FogStartDistance>
    <FogEndDistance>200</FogEndDistance>
    <ShadowFadeStartDistance>500</ShadowFadeStartDistance>
    <ShadowFadeEndDistance>5000</ShadowFadeEndDistance>
    <ShadowCasterDistance>5000</ShadowCasterDistance>
    <BloomAmount>2</BloomAmount>
    <BloomThreshold>0.7</BloomThreshold>
    <ExposureAmount>1</ExposureAmount>
    <DynamicRangeTransitionMaxScale>4.5</DynamicRangeTransitionMaxScale>
    <DynamicRangeTransitionMinScale>0.5</DynamicRangeTransitionMinScale>
    <DynamicRangeTransitionTime>0.5</DynamicRangeTransitionTime>
  </SceneEnvironment>
</root>

The remaining tasks are the usual ones, with the exception of what relates to the local field of type “EffectBatchHelper”.

   1: ...
   2: EffectBatchHelper batcher = new EffectBatchHelper();
   3: batcher.CollapseEffects(scene);
   4: ...

What’s its use? It helps create batches of effects by analyzing the effects used by each of the models in the scene. In other words, it collapses similar “materials” in order to optimize draw calls.
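Conceptually, collapsing materials amounts to grouping mesh parts that share a material, so each group costs a single effect/state switch instead of one per part. The sketch below is an illustrative stand-in, not Sunburn's actual implementation (`MeshPart` and `MaterialBatcher` are hypothetical names):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for a mesh part tagged with a material name.
public record MeshPart(string Material, string Name);

public static class MaterialBatcher
{
    // Group parts sharing the same material so they can be drawn back to back
    // with one state change per group rather than one per part.
    public static Dictionary<string, List<MeshPart>> Collapse(IEnumerable<MeshPart> parts)
    {
        return parts.GroupBy(p => p.Material)
                    .ToDictionary(g => g.Key, g => g.ToList());
    }
}
```

For example, five parts spread across two materials collapse into two batches: two effect switches per frame instead of five.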

v. Updating:

The only special thing to notice here has to do with the water effect: on every update call, the normal-map texture to apply is recalculated, in order to achieve the animation effect on the water’s surface.

   1: // Apply the current water animation "frame" to the water effects.
   2: for (int p = 0; p < water.MeshParts.Count; p++)
   3: {
   4:     ModelMeshPart part = water.MeshParts[p];
   5:     if (part.Effect is LightingEffect)
   6:         (part.Effect as LightingEffect).NormalMapTexture = waternormalmapframe;
   7: }

Other than that, there’s nothing else to comment on, since all handlers are automatically updated by the game instance itself.

vi. Drawing:

As said at the beginning of this article, the game renders the refraction texture, then the reflection one, and finally the main output.

In order to achieve this goal, you first must set up the state of the scene.

   1: // Setup the scene state.
   2: sceneState.BeginFrameRendering(view, projection, gameTime, environment);
   3: ...

Then, for each “special” map (in this case, the refraction and reflection maps, in that order), select the render target, choose which lights are active and which objects they affect, and draw the scene.

   1: //-------------------------------------------
   2: // Generate the refraction map.
   3:  
   4: // Adjust the reflection / refraction lighting based on performance.
   5: if (renderQuality == DetailPreference.High)
   6: {
   7:     keyLight.Enabled = true;
   8:     keyLight.ShadowType = ShadowType.AllObjects;
   9:     fillLight.Enabled = true;
  10:     fillLight.ShadowType = ShadowType.AllObjects;
  11:     sunLight.Enabled = true;
  12: }
  13: else
  14: {
  15:     keyLight.Enabled = false;
  16:     keyLight.ShadowType = ShadowType.None;
  17:     fillLight.Enabled = true;
  18:     fillLight.ShadowType = ShadowType.None;
  19:     sunLight.Enabled = true;
  20: }
  21:  
  22: // Add the light rig.
  23: renderManager.LightManager.SubmitLightRig(lightRig, ObjectLifeSpan.Frame);
  24:  
  25: // Begin generating the refraction map.
  26: refractionTarget.BeginFrameRendering(sceneState);
  27:  
  28: // Clear the depth buffer then render.
  29: graphics.GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Gray, 1.0f, 0);
  30: RenderTarget(refractionTarget);
  31: refractionTarget.EndFrameRendering();

   1: //-------------------------------------------
   2: // Generate the water reflection map.
   3:  
   4: // Adjust the reflection / refraction lighting based on performance.
   5: if (renderQuality != DetailPreference.High)
   6:     sunLight.Enabled = false;
   7:  
   8: // Add the light rig.
   9: renderManager.LightManager.SubmitLightRig(lightRig, ObjectLifeSpan.Frame);
  10:  
  11: // The water reflection map includes the orbs so add them as dynamic frame objects.
  12: foreach (Orb orb in orbs)
  13:     renderManager.SubmitRenderableObject(orb.model, orb.mesh, orb.currentMeshToObject, sceneWorld, false, ObjectLifeSpan.Frame);
  14:  
  15: // Begin generating the water reflection map.
  16: waterReflectionTarget.BeginFrameRendering(sceneState, waterWorldPlane);
  17:  
  18: // Clear the depth buffer then render.
  19: graphics.GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Gray, 1.0f, 0);
  20: RenderTarget(waterReflectionTarget);
  21: waterReflectionTarget.EndFrameRendering();

Lastly, the scene is rendered for the third time, combining both previous maps and thus producing the final output for the scene.

   1: //-------------------------------------------
   2: // Render the main scene.
   3:  
   4: // Adjust the lighting based on performance.
   5: if (renderQuality == DetailPreference.High || renderQuality == DetailPreference.Medium)
   6: {
   7:     keyLight.Enabled = true;
   8:     keyLight.ShadowType = ShadowType.AllObjects;
   9:     fillLight.Enabled = true;
  10:     fillLight.ShadowType = ShadowType.AllObjects;
  11:     sunLight.Enabled = true;
  12: }
  13: else
  14: {
  15:     keyLight.Enabled = renderQuality != DetailPreference.Off;
  16:     keyLight.ShadowType = ShadowType.AllObjects;
  17:     fillLight.Enabled = true;
  18:     fillLight.ShadowType = ShadowType.None;
  19:     sunLight.Enabled = true;
  20: }
  21:  
  22: // Add the light rig.
  23: renderManager.LightManager.SubmitLightRig(lightRig, ObjectLifeSpan.Frame);
  24:  
  25: // The main rendering pass includes all objects so add the water and orbs as dynamic frame objects.
  26: foreach (Orb orb in orbs)
  27:     renderManager.SubmitRenderableObject(orb.model, orb.mesh, orb.currentMeshToObject, sceneWorld, false, ObjectLifeSpan.Frame);
  28: renderManager.SubmitRenderableObject(scene, water, waterMeshToObject, sceneWorld, false, ObjectLifeSpan.Frame);
  29:  
  30:  
  31: // Apply main scene preferences (higher quality than reflection / refraction).
  32: renderManager.ApplyPreferences(preferences);
  33:  
  34: // Begin main frame rendering.
  35: editor.BeginFrameRendering(sceneState);
  36: renderManager.BeginFrameRendering(sceneState);
  37:  
  38: // Clear the depth buffer then render.
  39: graphics.GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
  40: renderManager.Render();
  41:  
  42:  
  43: // Setup and render reflective / refractive pass for water and orbs using additive blending.
  44: GraphicsDevice.RenderState.AlphaBlendEnable = true;
  45: GraphicsDevice.RenderState.SourceBlend = Blend.One;
  46: GraphicsDevice.RenderState.DestinationBlend = Blend.One;
  47:  
  48: foreach (Orb orb in orbs)
  49:     RenderMesh(orb.mesh, orb.currentMeshToObject * sceneWorld, sceneState, orbEffect, null, refractionTarget.GetTexture());
  50:  
  51: GraphicsDevice.RenderState.CullMode = CullMode.None;
  52:  
  53: RenderMesh(water, waterMeshToObject * sceneWorld, sceneState, waterEffect,
  54:     waterReflectionTarget.GetTexture(), refractionTarget.GetTexture());
  55:  
  56: GraphicsDevice.RenderState.AlphaBlendEnable = false;
  57: GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
  58:  
  59:  
  60: // Done rendering main frame.
  61: renderManager.EndFrameRendering();
  62: editor.EndFrameRendering();

Each rendering step is started and ended in a way similar to how the “SpriteBatch” class works, so there are no surprises here.

Finally, we’re done rendering the scene frame.

   1: sceneState.EndFrameRendering();
   2: ...

vii. Overall:

All of the provided demo projects include a detailed explanation of what is going on in each example, which greatly helps you understand the rationale behind the engine’s processes.

II. The Editor.

Now, you may be wondering: “how can I speed up the modeling process? If I had to manually write material files and so on, it’d be cumbersome!”.

You’re right, but fortunately the engine comes with a handy editor that spares you that task.

How? Simple: by using the game class provided by Synapse Gaming, you can change materials and position lights, among other things.

Note: in order to use the editor you must add one more field to your code: the LightingSystemEditor field.

The following video says it all:

For more videos showing off the Sunburn engine please visit this site:

http://www.youtube.com/user/bobthecbuilder

Well, this is it. Now it’s your turn to do your own tests and share your experience with the Sunburn engine!

Watch this space,
~Pete

 

> Link to Spanish version.