csharp:aspnet:webapi [2013/06/23 14:54] (current) rtavassoli
Web API can deal with POCOs. That's that; nothing more needs to be said.
==== Polymorphism ====
WCF requires //[KnownType]// attributes. Web API uses the JSON serializer. The WebApiConfig under the App_Start folder can be tweaked to deal with polymorphism by including the type name either for everything or just for objects. Simply set the json.SerializerSettings.TypeNameHandling property on the serializer:
<code csharp>
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Json.NET setting: emit and honor a $type property so
        // polymorphic payloads deserialize to the concrete type.
        var json = config.Formatters.JsonFormatter;
        json.SerializerSettings.TypeNameHandling = TypeNameHandling.Objects;
    }
}
</code>
This is especially useful for an aggregate command handler, which handles all commands sent to it for a certain aggregate. The commands can be of different types, and the list of possible types is extensible. All the client has to do is include the type, e.g. { $type: "ProM.RescheduleServiceAction, ProM", id: ... }, and send the command off to //the// command handler for all commands.
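As a sketch of how such a handler might look (the base class, controller, and action shown here are illustrative assumptions; only RescheduleServiceAction is named above), a single Web API action can accept the abstract command type and let the serializer pick the concrete subclass from $type:

<code csharp>
// Hypothetical command hierarchy for illustration.
public abstract class ServiceAction
{
    public int Id { get; set; }
}

public class RescheduleServiceAction : ServiceAction
{
    public DateTime NewDate { get; set; }
}

public class CommandsController : ApiController
{
    // With TypeNameHandling enabled, the $type property in the JSON
    // body selects the concrete subclass, so this one action can
    // receive every command type for the aggregate.
    public IHttpActionResult Post(ServiceAction command)
    {
        // dispatch on command.GetType() here
        return Ok();
    }
}
</code>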
>
The same effect can be obtained by having separate handlers for different commands. However, I((not everyone, especially not the people rejecting distributed transactions)) believe in batching commands and guaranteeing to the user that either all will pass or all will fail. And when sticking to SQL Server 2008 or higher, no distributed transactions are needed for this, even if the commands go to different databases inside the same database server.
>
Without batching, you will either have to build composite commands or workflow engines. The problem with composite commands is that the same //smaller// command can now be part of more than one command, and system maintenance will suffer. The problem with workflow engines((or //sagas//)) is that they are complicated, and they should not be needed in simple cases where a user wants to change a name and a birthday together; if one of those fails, you should not have to implement a complicated workflow to compensate. A simple transaction wrapping two separate commands is so much easier((KISS, n'est-ce pas?)).
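A minimal sketch of such a batch, assuming two databases on the same SQL Server instance (connection strings, table, and column names are hypothetical). On SQL Server 2008 or higher, connections opened sequentially inside one TransactionScope against the same server are not promoted to a distributed (MSDTC) transaction:

<code csharp>
using System;
using System.Data.SqlClient;
using System.Transactions;

static void ExecuteBatch(string connStringDb1, string connStringDb2)
{
    using (var scope = new TransactionScope())
    {
        using (var conn = new SqlConnection(connStringDb1))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "UPDATE Persons SET Name = @name WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@name", "Alice");
                cmd.Parameters.AddWithValue("@id", 1);
                cmd.ExecuteNonQuery();
            }
        }
        using (var conn = new SqlConnection(connStringDb2))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "UPDATE Persons SET Birthday = @bd WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@bd", new DateTime(1980, 1, 1));
                cmd.Parameters.AddWithValue("@id", 1);
                cmd.ExecuteNonQuery();
            }
        }
        // Both commands commit together; if either throws,
        // neither change is persisted.
        scope.Complete();
    }
}
</code>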