Windows Store Apps for Dynamics CRM – Best Practices as of 5/1/2013

Disclosure: I work for Microsoft. The views in this post are my own and not those of Microsoft.

The Problem: Authentication

When we talk about writing a Windows Store app for CRM, the primary problem we need to address is authentication. After we examine the problem, we’ll walk through two solutions for authenticating and writing code that communicates with CRM from a Store app.

The Solutions: Sample SDK and Unsupported, Unsanctioned Goodness (for CRM Online on O365 Only)

  1. First we’ll walk through using the CRM Product Team’s sample SDK for Windows Store Apps. You may have used this already (I hope you have), but stay tuned, because there are some tricks that we can use to improve the developer experience, and we’ll identify those.
  2. Then, we’ll talk about using the Organization Service OData endpoint directly from our Windows 8 Store App. Why, you might ask, am I talking about option 1 first if I have a solution that will let you call the OData service directly? Because, young Jedi, this approach is unsanctioned, has no path for official support, and if it breaks your apps, you’re on your own. But it’s still awesome. And I’m doing it last because I like suspense.

Why is there an authentication problem?

So why is there a problem authenticating with CRM from Windows Store apps?

When we’re building apps for CRM on other platforms, where we have access to the full .NET library, we use the Microsoft.Xrm.Sdk.dll library to connect to CRM. This library does quite a bit for us. Perhaps most importantly, it manages authentication for us. It has all the code in there that navigates through the steps the CRM web service exposes in compliance with the WS-Trust protocol. However, Microsoft.Xrm.Sdk.dll is dependent upon .NET 4.0 and WIF (Windows Identity Foundation). When we’re building Windows Store Apps, we have a lighter .NET profile that doesn’t include these resources.

There are two endpoints for the Organization Web Service that we use to access CRM data.

  • The OData endpoint is not available to external callers (for now – read on for solution 2 :) ).
  • The SOAP endpoint uses the WS-Trust protocol to authenticate callers. WS-Trust itself doesn’t require .NET 4.0 or WIF, and for that matter, neither does the CRM SOAP endpoint. It just so happens that the code in Microsoft.Xrm.Sdk.dll uses those two resources for its own WS-Trust implementation. Without this library, we either have to write our own WS-Trust implementation against the slimmed-down .NET profile available to us (hard), or we can use the sample SDK provided by the product team (easy).

Sample Windows Store App Project: Take Notes on Contacts

This sample project gives you a list of Contacts and lets you create a Note for each one:

[Screenshot: Contact list page]

[Screenshot: note-creation page]

Solution 1: The Sample SDK

Benefits:

  • Authentication = Easy
  • You’re building Win8 apps for CRM right away

Drawbacks:

  • No LINQ
  • No await/async – have to use EAP
  • No early binding
  • No support for Windows authentication – only IFD and CRM Online

Is the 4th bullet really a drawback? Not really, but I thought I’d note it here. If you really wanted to build an app that can only be used inside your company’s firewall, you could write your own Windows Authentication mechanism.

Of the 3 drawbacks, the last 2 are actually easily rectified – but I won’t detail them here, because Marc Schweigert has already created a great video that walks through the steps you can take to get early bound entities and await/async methods:

http://blogs.msdn.com/b/devkeydet/archive/2012/12/01/getting-started-with-the-crm-2011-windows-store-app-sample.aspx

Unfortunately, there’s no LINQ support; you’re stuck with FetchXML. Put it in a static class like I do in my sample app, and use the LINQPad solution Marc recommends if you want to.
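For a sense of what that looks like, here’s a minimal sketch of keeping FetchXML in a static class. The query below is illustrative only – it is not the actual query from my sample app:

```csharp
// Illustrative only: a central home for the app's FetchXML strings,
// so the query text stays out of page code-behind and viewmodels.
public static class Queries
{
    public const string ActiveContacts =
        @"<fetch version='1.0' output-format='xml-platform' mapping='logical'>
            <entity name='contact'>
              <attribute name='contactid' />
              <attribute name='fullname' />
              <filter type='and'>
                <condition attribute='statecode' operator='eq' value='0' />
              </filter>
            </entity>
          </fetch>";
}
```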

Solution 2 (Unsupported): Call the Odata endpoint from the outside

  • To install the NuGet package, run this cmdlet: Install-Package FedId.WinRT.Project.Scaffolding -Version 1.6.5.2
  • My Sample Project Code: https://github.com/andisimo/CrmFedId (see the instructions on how to get the code working below)

Yes, this is groundbreaking. You’ve never been able to do this before, and you aren’t misunderstanding me. There is a way now to call the CRM OData endpoint from a Windows Store app. It’s unsupported – and I’ll explain what that means below. It only works for CRM Online tenants running on the Office 365 cloud, not for Live ID tenants or on-premises installations. You’ll also want to keep in mind that this approach has all of the limitations of the OData endpoint (it only does CRUD operations over entities), so there are some scenarios where you may need to access the SOAP endpoint via the sample SDK anyway. However, for most Windows Store apps, CRUD over entities will probably be all you’ll ever need.

You can get the package by running the above cmdlet from the Package Manager Console in Visual Studio with your Windows Store app project open.

The benefits of this approach are that you can use the modern code conventions we’ve grown used to since CRM 2011 was released. This is like writing a web resource that runs inside of CRM, because we’re using the same OData endpoint! The authentication part is relatively easy. Let me correct myself – it is easy for US, because a very smart programmer has done a substantial amount of work to build the assets that make this possible.

The drawbacks are that this approach is not supported. What does this mean? As CRM devs, when we hear “not supported”, we tend to think we’re talking about a forbidden no-no. That’s not the case with what I’m about to discuss. Remember, we’re building an app outside of CRM. How we authenticate from that app is not going to place our CRM deployment in an unsupported state. What unsupported means, in this instance, is that there is NO GUARANTEE that what we build using this approach won’t break in the future if something changes in CRM. This is not aligned with the CRM product group’s strategy and you should have no expectation that what I’m about to show you will merge with or become part of the officially recommended way of doing things in the future.

Now, why am I sharing it with you if I have to give such a stern disclaimer up front? Because what I’m about to show you is exciting, and at the very least, it’s a very compelling thing to test and try out. When you’ve carefully weighed your decision, some of you might decide to use it. Just know that any minor change in the CRM api could potentially cause this to stop working, and while that may be unlikely, it’s definitely possible. We have a major release coming out later this year, and you’ll want to keep your eye on that.

How Solution 2 Works

Solution 1 (The sample SDK from the CRM team) uses WS-Trust to authenticate via the SOAP endpoint, passing a request security token in the SOAP header. This is an established, solid approach to authentication, but there are problems. Since this all happens via the SOAP header, we are constrained to use the SOAP service, which introduces the developer productivity limitations we discussed earlier.

Authentication to the CRM endpoints is fundamentally nothing more than a requirement that a caller meet certain criteria. The SOAP endpoint looks for a valid token in the SOAP header. The OData endpoint looks for a valid, encrypted cookie passed in the request that identifies the caller.

The FedId WinRT Project Scaffolding collects the user’s credentials. It then walks through the steps a browser client goes through, inserting the credentials at the necessary places, to mimic the steps a browser takes in WS-Federation passive authentication. It collects the various tokens that are returned during the several required steps, in order to finally get a token that can be used to access the CRM Odata endpoint.

How to Use the FedId WinRT Project Scaffolding Package

  1. Download and install the WCF Data Services Tools for Windows Store Apps:
    1. http://www.microsoft.com/en-us/download/details.aspx?id=30714
  2. Download the CSDL from CRM > Settings > Customizations > Developer Resources
  3. Create an empty Windows Store app
  4. Install the NuGet package from the Package Manager Console:
    Install-Package FedId.WinRT.Project.Scaffolding -Version 1.6.5.2
  5. Add a Service Reference and point it to the location of the csdl file you downloaded. Name it OdataReference. It will have a context type to match your org name (like OdataReference.OrgNameContext). Giving your service reference a namespace other than “OdataReference” will require you to find and change a few using statements that are currently “using CrmFedId.OdataReference”.

What’s in the Sample Code

  • There are only a few main components at work here:
    • I’ve got a MainPage.xaml for sign-in, plus BasicPage1.xaml and BasicPage2.xaml. Upon authentication, MainPage navigates to BasicPage1, which shows a GridView of Contacts. When you click (or tap – come on, this is Windows 8) a Contact, BasicPage1 navigates to BasicPage2 and shows you a textbox that allows you to create a note for that contact.
    • I also have a ContactsNotesViewModel class that has a few properties to store the data I get from CRM so the xaml pages (views) can bind to it, and some methods to get that data. The context for the CRM tenant is also instantiated and stored in the ViewModel, as well as a few events that help me keep track of the authentication and data responses. I have a static property of type ContactsNotesViewModel defined in App.xaml.cs and instantiated in the OnLaunched method of that class.
    • Security.IdpCredentialsFlyout.xaml: this is the definition for the flyout. This “Security” folder is added when you install the NuGet package. You can look at the codebehind for this page to see the event handler for when the user pushes the “Sign In” button (called SignIn), how it handles the credentials, and how it ultimately calls the FedId.Instance.SignInAsync method and gets the token in the response.
    • A class called FedId that manages security tokens, settings, things like that. It has a static property of its own type (FedId) called Instance, that you will use to access the actual tokens you get from CRM from elsewhere in your project code.
    • You don’t need it, but I have an extension method, courtesy of Marc Schweigert again, that gives me an await/async-friendly ExecuteAsync<T> method with a generic type parameter. It’s definitely better to have it than not.
    • And, finally, your Service Reference, which gives you the entity context for your CRM organization. Your context is at “OdataReference.[OrgName]context”.

How to Get the Sample Code Working

  1. Follow the steps in the section above, “How to Use the FedId WinRT Project Scaffolding Package”, with the EXCEPTION OF steps 3 and 4 – creating a new Windows Store app project and installing the NuGet package. You already have your project in the sample code, and it already has the NuGet package installed.
  2. Once you have your Service Reference added, open the Find dialog (Ctrl+F) and search the ContactsNotesViewModel for the text “psecdemoContext”. You should find 3 instances of it – in the type declaration for a private field, in a public property, and in the HaveToken() method that sets the public property. Change the type of these items to match your own [orgname]Context, where orgname = the name of your CRM organization.
  3. Use the Find dialog again to search for “psecdemo.” (psecdemo with a period). This finds 2 URLs where you’ll need to insert your own organization name. One is in the ContactsNotesViewModel, and the other is in the MainPage.xaml.cs file.
  4. Build!

Code Flow

The main page loads (in the LoadState method). The first thing that happens there is that we subscribe to the following event:

FedId.Instance.ShowSettingsFlyout

The lengthy lambda expression that follows mostly makes sure that when the event is fired, the flyout where the user enters his/her credentials is displayed properly.

Next, we subscribe to the TokenReceived and ContactsLoaded events on the ContactsNotesViewModel. We’ll discuss these events in a moment when they actually get fired, in order to keep our walkthrough of the code chronologically sequenced.

So finally we get to the bottom of the LoadState method, where we find the first code that actually proactively does something:

if (!FedId.Instance.IsSecurityTokenValid())
{
   var result = await FedId.Instance.SignInAsync(
      "https://psecdemo.crm.dynamics.com/");
}
else
{
   App.ContactsNotesVM.HaveToken();
}

This code checks the FedId.Instance property to see if there is a valid security token there. Sometimes there will be – for example, if you’ve already authenticated and your computer has stored the authentication cookie. If you haven’t authenticated, or if your cookie is expired, there won’t be a valid security token, and the code will invoke the SignInAsync method. You can’t see this method’s implementation – only its signature is visible in the metadata viewer – but we know that somewhere in this method, or in other code it calls, the ShowSettingsFlyout event is triggered (the one we subscribed to at the very top of our LoadState method in MainPage.xaml.cs).

The user then enters credentials and clicks the “Sign In” button, triggering the

IdpCredentialsFlyout.SignIn

method. When that method gets a token back successfully (i.e. if result == true in the following code), we call the ContactsNotesViewModel.HaveToken() method:

if (result == true)
{
   //The HaveToken method fires the TokenReceived event on our viewmodel
   App.ContactsNotesVM.HaveToken();
}

As the comment there says, the HaveToken() method turns around and fires the TokenReceived event, after initializing the ViewModel’s context. There may be a code smell here – calling a method on one object from another just so the first object fires its own event – but it seemed like a solid way to get this sequence working properly in my sample.

public void HaveToken()
{
   Ctx = new psecdemoContext(new Uri(
      "https://psecdemo.crm.dynamics.com/XRMServices/2011/OrganizationData.svc/"));

   TokenReceived();
}

Now, let’s go back to MainPage.xaml.cs. In our LoadState method, you’ll remember that we subscribed to the TokenReceived event. When this event is fired, we call the LoadContactsAsync method on our ViewModel:

App.ContactsNotesVM.TokenReceived += () =>
{
   App.ContactsNotesVM.LoadContactsAsync();
};

The LoadContactsAsync method is where we have the code that actually interacts with the OData service.

public async void LoadContactsAsync()
{
   if (FedId.Instance.IsSecurityTokenValid() == false)
   {
      throw new InvalidOperationException(
         "ContactsNotesViewModel.LoadContactsAsync cannot retrieve contacts " +
         "when FedId.Instance.IsSecurityTokenValid() is false. The token is " +
         "set when it is correctly received from the " +
         "FedId.Instance.SignInAsync method call.");
   }

   var query = from c in Ctx.ContactSet
               select c;

   Ctx.SendingRequest += (sender, args) =>
      args.RequestHeaders[HttpRequestHeader.Cookie] =
         FedId.Instance.SecurityToken.SessionCookieHeader;

   var results = await ((DataServiceQuery<Contact>)query).ExecuteAsync();

   foreach (Contact c in results)
   {
      Contacts.Add(c);
   }

   this.IsDataLoaded = true;
   Loaded();
}

The last thing this method does is trigger the ContactsNotesViewModel.Loaded event. We subscribed to this in our MainPage.LoadState method:

App.ContactsNotesVM.Loaded += async () =>
{
   await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
   {
      this.Frame.Navigate(typeof(BasicPage1));
   });
};

As you can see, once we know the contacts have been loaded into our ViewModel, we are free to navigate to BasicPage1, where the databinding between the page and the ViewModel takes over and the contacts are displayed just like you saw in the picture at the top of the post.

Instead of using an event to tell us when the contacts were loaded, a better solution would be to make the ContactsNotesViewModel.Contacts property an ObservableCollection and implement the INotifyPropertyChanged interface on the ViewModel, which would allow us to navigate to BasicPage1 without worrying about whether we would get there before the contacts were loaded. The viewmodel would simply update the view with the data as it arrived.
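A minimal sketch of that alternative follows. The type and member names are assumptions for illustration, not the sample app’s actual code; the view would bind to Contacts once, and items would appear as they are added:

```csharp
using System.Collections.ObjectModel;
using System.ComponentModel;

public class ContactsNotesViewModel : INotifyPropertyChanged
{
    // The view binds to this collection once, at startup. Items added
    // later show up automatically because ObservableCollection raises
    // its own collection-changed notifications.
    public ObservableCollection<Contact> Contacts { get; private set; }

    private bool _isDataLoaded;
    public bool IsDataLoaded
    {
        get { return _isDataLoaded; }
        set
        {
            _isDataLoaded = value;
            OnPropertyChanged("IsDataLoaded");
        }
    }

    public ContactsNotesViewModel()
    {
        Contacts = new ObservableCollection<Contact>();
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}
```

With this in place, MainPage could navigate to BasicPage1 immediately after authentication; remember that additions to the collection still need to happen on the UI thread (via the Dispatcher).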

But you’ll notice that this isn’t the only less-than-optimal choice I made in writing this Windows Store app. My goal was to show you two ways to access the CRM web services from the app, and I hope you’ve found at least one of them valuable!

New Role and Disclosure

As of Dec 10, 2012, I work at Microsoft as a CRM Technical Architect focused on Public Sector. The last 4 months have been really exciting for me. Because I’ve spent my career in the Microsoft partner channel, I have always had the ambition of working for Microsoft “one day”.

Even as someone who was very favorably inclined toward the company, however, I had begun to be influenced by the negative media bias constantly blowing against Microsoft. When I had the opportunity to apply and be considered for the position, I wondered about the stories I had read in Vanity Fair last summer. Even though Vanity Fair smells like perfume and has pictures of women in skirts and face paint all over it, I was naïve enough to think they might have something relevant to say about the tech industry.

In my 4 months at Microsoft, I’ve found little grounds for Vanity Fair’s negative report on the work environment here. The magazine’s portrayal of stack ranking and that practice’s creation of a negative and adversarial work environment is entirely inaccurate. The reality that I’ve experienced is a very positive workplace. The people here are extremely bright, driven, and competent, and the majority of them are excited about Microsoft. They are behind the company and are proud of its history and its continuing vision of changing the world through products that reach almost everyone living in it.

So, as a point of disclosure, future posts about technology should be read in the light of the fact that I now work at Microsoft. However, the views expressed here are my own and NOT those of Microsoft.

Simply Simplify

After an off-the-cuff remark I made the other day in my post on the CRM security model, I ran across this video post by James Governor. He’s driving at something bigger than an argument for general simplicity, but it was the comments he made on simple software design that caught my attention. He says the following:

Software had always thrived on complexity, on more configuration, on making things more and more complicated so that everything could be custom, so that the user could do things in a million different ways. But the problem is when you do that, they will do things in a million different ways, and that makes support much more expensive. It creates all these challenges … it turns out, the way to make things better is to make things simple, and to have an opinion.

I like Governor’s concept of having an opinion. In using that term, he is referring to stubborn people who build products in a simple way despite criticism that the product won’t do everything users might want it to do. For example, he tells the story of Ruby on Rails, created by a very opinionated person by the name of David Heinemeier Hansson. Hansson chose standardization over configurability, and thereby chose simplicity over complexity. Developers who use Rails have fewer choices. It turned out that options weren’t what developers were looking for – they responded very well to the simplicity of Rails, and it has become one of the most popular web development environments available.

On the flipside, he quickly demos a tablet running Windows 8 Pro. The interface is simple and lovely. App design focuses on content not chrome, and the interaction is intuitive. His praise turns sour, however, when he lands on the old Windows desktop. While Microsoft did a great job building a simple interface for Windows 8, they were not opinionated enough in telling people “this is how we’re going to interact with Windows from now on.” The result, in Governor’s analysis, is a disturbing dual-personality that jars the user experience and cheapens the otherwise tremendous accomplishment embodied in the new metro interface.

Governor isn’t the only one to level this criticism at Windows 8 – it’s actually a common complaint, and I think it’s a valid one, with qualifications, which I’ll discuss in a moment. The old face of Windows is still in Windows 8 because Microsoft thinks many people want more options, that many people don’t want to change, and yet they know they have to change for Windows to be relevant in the changing technology landscape. So they developed an operating system that tries to please everyone. I believe the size of Microsoft’s customer base makes it very difficult for the company to make the types of opinionated decisions that Governor favors. But personally, I’d like to see them take a bold move like that just to see what happens.

Taking a step back, though, one could argue that there’s another reason why the Windows 8 interface had to be the way it is. Today’s tablet technology simply doesn’t have the ability to run all the applications that people use. Windows 8 is the new version of Windows, period – not a new auxiliary category in the Windows family. It’s one operating system for tablets and PCs. This means a user could realistically use one device for all of their computing activities. Isn’t that a step toward simplification? One step toward reducing the current load of 3 personal devices to 1? (I would like to see a tablet that can, like the Windows 8 tablet, get carried around and also docked, allowing both mobile and desktop computing. But on top of that, my ideal device could be easily strapped to your back, your hip, or against your ribs under your arm, FBI gun-style, and it would have a little removable earpiece for phone calls).

Governor’s critique seems to limit its analysis to the experience of the single device, but I think it’s also valid to consider the consolidation of devices a great step toward simplicity; and in that respect, I think Microsoft did exert a strong opinion. Whether or not that opinion gains the respect of the market remains to be seen.

Governor’s Opinionated Design concept isn’t the only approach to simplicity. A great book I read recently called Nail it then Scale it, by Nathan Furr and Paul Ahlstrom, advises entrepreneurs to build simple products with the following anecdote:

A number of recent books, such as The Paradox of Choice, summarize decades of social psychological research showing that even though it may appear that we crave more choices, in fact, we respond to simplicity. For example, in one well-known experiment at a Bay Area grocery store, two different jam exhibits were tested. In the first exhibit, six types of jam were put on display for customers to sample, and the resulting sales were robust (30% of customers bought a jam). In a second exhibit, customers were given far more choices; twenty-four different types of premium jam were put on display in the true spirit of customer choice. Shockingly, sales dropped over seven-fold to a meager 4% sales rate. In other words, despite the fact that modern capitalism has led to an explosion in choices, customers respond to simplicity more than complexity (58 – 59).

Furr and Ahlstrom also relate the story of two entrepreneurs who built an application prototype that had 20 major features. When researching the product’s feasibility with actual potential customers, they found that customers would only be willing to pay about $200 per month for the product. They also found, however, that most of the customers placed the most value on 3-4 of those 20 features. The entrepreneurs rebuilt the prototype with only 4 features, and did their analysis with real potential customers again. This time, they found that the customers would be willing to pay $1,000 per month for the product. By reducing the clutter and mess caused by a slew of unneeded features, these entrepreneurs saw the perceived value of their product skyrocket.

Contrary to Governor’s concept of opinionated design, however, Furr and Ahlstrom’s premise is that no company should ever build a product before having an assurance from customers that they will buy the product. In essence, while Governor advocates telling the customer what they want, Furr and Ahlstrom advocate getting more customer feedback, and letting that feedback drive the product development.

There are nuances in each theory that soften the conflict between them. For example, Furr and Ahlstrom are very clear that although customers know what their problems and desires are, customers do not know the solution to those problems and desires. Quoting Henry Ford, who said “If I had asked customers what they wanted, they would have said faster horses”, Furr and Ahlstrom argue that the product creator/designer’s innovative contribution is the essential core of successful product development – something Governor would clearly agree with. However, unlike Governor, Furr and Ahlstrom argue that early, focused, and continued attention to customer feedback is necessary for the productive direction of an innovator’s creative energy.

Likewise, I have to assume that Governor’s opinionated design concept doesn’t completely reject the idea of letting the market drive product design – it simply argues that the product creators must take a stand on principles of simplicity, which might require ignoring much of that feedback to focus on the core messages it conveys.

Update

I’ve been plowing through more of Governor’s material and, as I come to a better understanding of his thought on this subject, I believe his opinionated design actually requires ignoring the conventional wisdom that presses in horizontally from non-customers (like analysts, competitors, etc.), rather than vertically from customers. The problem is, with the massive change in computing that we’re experiencing, customers often don’t know how to ask for the right things; and, according to Governor, vendors aren’t there yet either. It’s well worth a read (and a view – his posts often have videos also): http://redmonk.com/jgovernor/.

When the CRM Security Model Isn’t Enough

I myself am not an expert in organizational behavior or in structuring companies. So keep that in mind as you read my next statement, where I invoke the wisdom of these people anyway, without asking, based on my assumptions about things. I’ll let you work out the validity of my argument on your own.

I think large organizations that have complex chains of command where people (usually in sales) report to more than one lateral individual in the company have done a bad thing. I’m confident organizational experts would agree with me (experts.Invoke( ;-) ) ). Why would a salesperson span 2 regions and report to 2 bosses? How could this person possibly fulfill his accountability to both people?

I’ve worked with more than one large company with complex reporting structures full of exceptions. Some people may look at such a complex structure as a badge of honor that seems to say “here is something important enough to be complicated.”

But making CRM’s security model work with something like this is difficult, and it will cost extra money up front and over time. Imagine the money spent on this, and how much better it would be to simply … simplify. Alas, while B2C technology is focused on meeting the users’ demands for simplicity, B2B technology is forced to cater to demands for more complexity. It’s one of my greatest professional sorrows. Because deep down, I’m a minimalist. I believe clarity and simplicity are within our grasp anywhere, if we try. But unfortunately, in the case we’re describing today, that’s not your job.

The Problem in clearer terms

Let me lay out the problem in simple terms. A customer’s sales organization is split into regions. Each region has a leader who should be able to see all of the stuff for the people who work for him. Can we do this in Dynamics CRM?

YES. If only it were always this simple. You can create a business unit for each of the different regions. There can even be sub-regions if you need. The users go in the different business units, and they are given the proper security roles. The leaders of the regions go in the business unit for their region, and they have a security role which gives them privileges to interact with every record in that business unit (and possibly also with all of that business unit’s child business units).

This is the situation the CRM security model was designed for. And, to my point above, it should happen more often.

The Problem in slightly muddier terms

Now, in the situations I’ve been in where things got difficult, there were a lot of confusing exceptions to the rules I laid out above. But if I distill them into the simple principles, the main problem was this:

We have users who belong to more than one region.

Utterly crushed. Why? Because it probably didn’t have to be that way. In my experience, this situation can start with a well-intentioned effort to re-pick teams to address the market more effectively, and end with politics (like salespeople refusing to let go of accounts) that confuse the effort.

But you’re probably not here to rewrite an org-chart. So you think – TEAMS!! We’ll just put them on a team in the second business unit, so they can essentially exist in both places!

Don’t let me interrupt your high fiving and back patting. Carry on – you’re having fun. But as soon as you’re done, consider this – you still have the requirement for the leader of each region to see the stuff that belongs to users in his region. He WILL NOT see the stuff owned by the users who are part of his region by virtue of membership in a team. (Why not? See here). Remember, we decided above that the leader’s user record would reside in his business unit, and he would have privileges to all the records owned by people in his business unit.

There are enough possibilities here that you can work out the details yourself for this situation. It probably won’t be clean and pretty, but it can work. For example, teams can own records. Maybe the accounts in this CRM system are owned by the regional leader guy, or maybe they have a relationship with a piece of data that indicates the region. Since the accounts are the core of all the other data, you can create a rule through a plugin or workflow that looks at the related account record for created records (like an opportunity). If the opportunity is for an account that is in a different business unit than the salesperson who’s creating it, it gets assigned to the team for that business unit.
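The plugin idea described above could be sketched roughly like this – registered as a post-operation plugin on Create of opportunity. The one-team-per-business-unit convention and the attribute choices are my assumptions for illustration, not a definitive implementation:

```csharp
using System;
using System.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class AssignOpportunityToRegionTeam : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)
            serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        var opportunity = (Entity)context.InputParameters["Target"];
        var accountRef = opportunity.GetAttributeValue<EntityReference>("customerid");
        if (accountRef == null)
            return;

        // Find the business unit that owns the related account.
        var account = service.Retrieve("account", accountRef.Id,
            new ColumnSet("owningbusinessunit"));
        var accountBu = account.GetAttributeValue<EntityReference>("owningbusinessunit");

        // If the creating user sits in a different business unit, hand the
        // opportunity to that business unit's (assumed) sales team.
        if (accountBu != null && accountBu.Id != context.BusinessUnitId)
        {
            var teamQuery = new QueryExpression("team")
            {
                ColumnSet = new ColumnSet("teamid")
            };
            teamQuery.Criteria.AddCondition("businessunitid",
                ConditionOperator.Equal, accountBu.Id);

            var team = service.RetrieveMultiple(teamQuery).Entities.FirstOrDefault();
            if (team != null)
            {
                service.Execute(new AssignRequest
                {
                    Assignee = new EntityReference("team", team.Id),
                    Target = new EntityReference("opportunity", context.PrimaryEntityId)
                });
            }
        }
    }
}
```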

Or, you could modify the leader’s security role and give him access to see more, and then filter it down to what he wants to see using views. There are several ways of addressing this that could work. As a last resort, you could try something I’m going to describe in my next section, but it would be so anti-climactic for me to mention it here.

The Problem in dark obscurity

Now, imagine as you work through solving what you thought was a big problem, another little issue comes up. This one is mentioned off the cuff, almost as something taken for granted. Slid sideways into casual, non-charged conversation. You can save things right? And this program works with a mouse and keyboard? I assume it has colors? Oh good. Super. … And of course we’ll need to set up some security based on the type of sale we’re looking at.

Mushroom cloud. Let’s change the subject. Do you like politics? Let’s talk about those …

But it’s impossible to change the subject. Did you really think they didn’t see the mushroom cloud go off in your head? You know that security in CRM is role-based. The linchpin of this role-based security is record ownership. A record has an owner, and that owner resides in a business unit. The security role operates in the context of those 2 very simple pieces of data, and nothing else.

In other words, you can’t flag an opportunity as a “Corporate Sale” or a “Classified Sale” or as anything else and expect security to operate on those values. If this requirement didn’t exist in tandem with the last one I described above, you might be able to create business units for your regions as we discussed at first, and then create child business units underneath the regions, one for normal sales, and one for “Classified” sales. But this would assume that salespeople only work on one or the other, and that’s probably not true either. After all, you’ve had to come this far, why should something go your way now?

At this point, you might have to look at using the SDK. You could potentially write a plugin that executes in the pre-operation stage of the Retrieve and RetrieveMultiple platform messages. This would allow you to check who the user is and whether he’s somebody who should be able to see “Classified” sales. If the user is NOT that sort of user, you can modify the request to filter out those records, so views and forms won’t load the “Classified” data.

The downside to this is the obvious concern of maintaining security in code that the administrator can’t see or interact with. It’s also true that the security behavior of the system with this plugin in place would seem to directly contradict, in some cases, the privileges described in the security roles. But, if we’re smart and set it up as transparently as possible, it could be a very good solution. For example, maybe we could create an empty security role that has the name “View Classified Sales”, and have our plugin check to see if the user has that role to determine whether it needs to modify the query for the Retrieve methods. That way, an administrator looking at a user’s security roles could at least see that something was happening outside of the ordinary security construct.
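As a sketch of what that might look like, here is a minimal pre-operation plugin on RetrieveMultiple (the single-record Retrieve message would need similar treatment). The “View Classified Sales” role check follows the transparency idea above; the `new_saletype` attribute and its option value 2 are made-up illustrations, not anything from the product.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Hypothetical pre-operation plugin registered on RetrieveMultiple of opportunity.
public class FilterClassifiedSales : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider
            .GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(null); // run as SYSTEM

        var query = context.InputParameters["Query"] as QueryExpression;
        if (query == null) return; // FetchExpression queries not handled in this sketch

        // Does the calling user hold the marker role "View Classified Sales"?
        var roles = new QueryExpression("role") { ColumnSet = new ColumnSet("roleid") };
        roles.Criteria.AddCondition("name", ConditionOperator.Equal, "View Classified Sales");
        var link = roles.AddLink("systemuserroles", "roleid", "roleid");
        link.LinkCriteria.AddCondition("systemuserid",
            ConditionOperator.Equal, context.InitiatingUserId);

        bool canSeeClassified = service.RetrieveMultiple(roles).Entities.Count > 0;
        if (canSeeClassified) return;

        // Silently exclude classified records from the result set.
        // "new_saletype" and the option value 2 are illustrative only.
        query.Criteria.AddCondition("new_saletype", ConditionOperator.NotEqual, 2);
    }
}
```

Because the filter is appended in the pre-operation stage, the platform still applies normal role-based security afterward; this sketch only ever narrows what a user can see, never widens it.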

Out of time! But let me mention one more thing … if you’re interested, there’s one loophole that I’m aware of in CRM security. If everything lined up right for you, you might be able to use this loophole to address this last requirement we’ve been discussing. Read about it here.

Lessons from Agile Development with an Off-shore Team

My main role at my company is to deliver subject-matter expertise on Dynamics CRM (which includes solution design for companies building products on top of CRM) and to manage the delivery of the resulting product by our development team in Kiev, Ukraine. I enjoy both of these roles, but the actual project management piece, which is a part of the “managing delivery” role, I could honestly do without.

So when I saw how our team worked as I started my employment here, I was concerned. As might be expected from an off-shored professional services organization, the approach to work was a bit scattered at first. Projects would come in, the least-busy people were assigned to those projects, we tried to maximize the time each person spent working on billable work, etc. This meant that the team never had time to gel (as there was no consistency in the makeup of the teams from project to project). It also meant that each person was typically working on several projects at once. Projects were difficult to close out, and I don’t think it was a great environment for developers who want a more planned approach to their careers, the competencies they build, and the type of work they do.

So I was really happy when we had 2 projects start that were both multi-month development projects. The larger one has been going for 8.5 months and has 6 developers/testers working on it. Most of that team has been very consistent throughout the project. So at the beginning, I decided that this would be the perfect opportunity to introduce agile to the team. I had several particular goals:

  1. Avoid Gantt charts [shudder]. I hate them. And to try to manage an 8-month project that started with a 230-page technical spec using waterfall, a Gantt chart, and a critical path … I’d die first. The technical spec was written by a 3rd-party architect, so this created a sticky business triangle, but outside of that, we tried to use this architect as our on-site customer.
  2. Overcome the wall I felt between myself and the devs. It seemed like they were afraid to speak their minds to me, to contribute their ideas to our projects, to tell me when they had a better idea than I did … I assumed this was cultural. I know they didn’t agree with me all the time, because previously, from time to time, they had followed my way of doing things until they kind of exploded with frustration at doing it my way when they had a better idea. I wanted a more open dev environment where we could work together as equal contributors, even if I am technically above them in the chain of reporting.
  3. Instill agile skills in the team. This was for their own benefit as much as it was for mine. I consider agile experience a plus for any developer’s resume. This is especially true if he has TRULY experienced agile (not just adopted a few mechanisms like sprints and scrums).

So … how did it go? The culture gradually changed. I was gratified to see the technical lead on our team start to take over my role as the person who led the planning meetings, started writing the stories, etc. His confidence in this role grew as the project went on. We faced some challenges, such as going way over budget, but as I investigated the causes (and asked the direct question multiple times about whether this was a result of using the unfamiliar agile approach), we honestly determined that the project had just been poorly estimated at the beginning. Estimated at the beginning, you might ask? Yes … we couldn’t do everything the right way according to agile. I’ll discuss this below.

What Went Well

  1. We FINALLY got the art of agile estimation down. We started by looking directly at the spec and trying to group its sections into our iterations. This worked marginally well until the sections got too big to do in one iteration. We started re-writing sections from the spec as user stories that the developers could understand more readily and give the gut-feeling estimates we needed to generate our story points. As the weeks went on, the estimates began to coalesce. At the beginning, there would generally be one cluster of estimates with a 2-4 day spread between the highest and the lowest, and then one outlier from the “pessimistic guy” that was usually 5 days higher than everyone else. We averaged the estimates together to get our story points. While we were fairly consistent in the number of story points we accomplished from iteration to iteration, it could have been better. At the end, the estimates were much closer together, usually all within 1 to 1.5 days of each other.
  2. Planning improved. The major challenge with this project was the complexity of the technical spec. Did I say it was 230 pages long? And completely unreadable by normal humans. After starting with 2-week iterations and really struggling to consistently deliver the planned functionality due to problems that arose mid-iteration, we pared the iterations down to 1-week. We had our demo and planning every Tuesday, so the developers really had only 4 development days plus a few hours in an iteration. This dramatically improved our consistency. Keeping the planning cycles short left much less room for us to get bogged down in complexity we didn’t understand when we planned.
  3. Code quality was great. We re-factored constantly, we tested the functionality for each iteration diligently so we could release it at the end of the iteration (even though the customer never took the product out of the demo into production). I honestly think the code quality in this project was the best I’ve seen from our team.
  4. Cooperation was awesome. Since this project had most of our team working on it, I feel like it’s actually transformed our team. Our leader (who was appointed the technical development lead shortly after the project started) has been able to develop a great collaborative atmosphere with the other team members, and that’s not something that would have happened had they all just been working on 3 projects at once like they were before. They spoke to each other, walked to each other’s workstations, and generally had a much higher level of interaction than on other projects.

What didn’t go well

There’s only so much you can do in your first attempt to implement agile. The core of agile is the cooperation between business people and developers, and this is where we fell short, unfortunately. So, while I’m hesitant to say we were fully practicing agile with these shortcomings, I think we made great strides, and we’ll learn from these things moving forward.

  1. Collaborative workspace – while our developers have this among themselves, there was simply no way for us to put our “onsite customer” (the 3rd-party architect who wrote the spec) in the same room as the developers in Kiev for more than 3 days at the beginning of the project. At first, he was present via Skype at our daily standup meetings. Then, he was only present at our demos. He was always available for questions, and was very helpful, but it wasn’t the same as having him in the room. When could you ever have the customer in the room all the time for a professional services project, especially an off-shored one? I would say never. Unfortunately, this is a challenge that may not have a perfect solution. The best thing to do is to try to keep the customer as involved as possible, especially at the points where input is needed and decisions are made. Demand his availability, and, if possible, insist that he not require questions to be scheduled.
  2. Full implementation of my favorite concepts – there were some XP concepts that I just couldn’t get the team to try, including test-driven development and pair programming. I’ve spoken to developers who have successfully used these practices and will attest to their effectiveness, but my developers in Kiev were not ready for them. Well, we change gradually, right? Maybe sometime in the near future. But it’s important to have the right financial configuration on a project to introduce radically different development practices, and this probably wasn’t it.
  3. Welcoming change – this is another pillar of agile, and unfortunately, we fell short here. Why? We were working on a fixed-fee project. At first this was good for our adoption of agile, because instead of being accountable for every hour we spent, we could focus more on the result. However, as it became apparent the estimates we had used to price the project were way off, the financial burdens of the project required us (me, mostly) to start really challenging any input our onsite customer gave us that changed the scope of the project. Enter the business triangle I mentioned earlier. We had to document any changes as change requests to the paying customer, who then had to go back to the onsite customer (the third-party architect), who then came back to us, and the politics would commence.

The third item here is especially telling. One of the biggest challenges I see in a professional services team doing agile is the project’s financial arrangement. A fixed-fee project seemed like a good arrangement at first, and it probably would have been somewhat OK had we done a better job up front estimating. But the best arrangement would be for the customer to understand agile and the value it can create in his own go-to-market strategy. He should understand the advantage of not spending his full development budget before his product ever lands at an end-customer. There is a huge benefit financially and strategically in understanding the core functionality an end-customer needs in order to give your customer money (minimum releasable functionality). If your customer understands this principle, he can pay you to develop 10% of his product’s planned functionality and then have paying end-customers funding all or part of the other 90%. The alternative is paying for all 100% himself, and then realizing that 80% of it is irrelevant, and that he’s missing another 60% of things his end-customers want (I’m making all of these numbers up, but you get the point).

If everything were perfect, I would see a fixed-seat project, where the customer simply pays for x number of full-time developers and testers, and then commits the resources from his own company to work with that dev team daily. There’s no scope, no milestones, and no fixed fee. The customer knows what he’s going to pay for development each week or month, but he doesn’t know what the product is going to look like. Most people will be uncomfortable with this. However, if they’re smart, they’ll gladly accept that uncertainty for the increased agility that will allow them to get early feedback from customers and develop the solution that makes money the fastest.

CRM 2011 RESTful Service – Now You can Know as Much as Ryan Tomayko’s Wife

Regarding REST-based web services in general, Ryan Tomayko wrote a stellar article back in 2006 called “How I Explained REST to My Wife.” It’s awesome – I wish all learning was this enjoyable.

It won’t help you write REST code any better – in particular, it probably won’t help you write better code for Dynamics CRM 2011’s Odata service, but it will help you envision the point of the RESTful interface in a way you otherwise probably wouldn’t have. If you’re like me, anyway. I read most descriptions of REST and immediately cut bait, thinking it’s going to be way too much work to understand it as much as I’d like to.

If the link doesn’t work, give it a few minutes and try again – I get some intermittent web service errors from the site, but it’s worth the effort.

Social Cloud Platforms

Did you see Steve Yegge’s rant on Google and platforms (linked at the bottom of this article)? This isn’t the first time anyone has contrasted platforms and products, but it made me think and ask – what is the relationship between social networks and software platforms?

Steve Yegge’s main point in his book-length post is that a software giant cannot maintain a strong market position in today’s computing market by providing great products. While a great product can give a company success for some period of time, it’s almost impossible to consistently predict the products that customers will want next. In a marketplace with so many competitors and such a rapid pace of change, how could anyone possibly anticipate the future products that will drive the market? Companies can still be profitable with products, but the companies that will win the war for cloud/social/mobile supremacy will do it by building a platform, a hub for all users and developers to connect on and build upon.

The problem is that we [Google] are trying to predict what people want and deliver it for them … You can’t do that.  Not really.  Not reliably.  There have been precious few people in the world, over the entire history of computing, who have been able to do it reliably.  Steve Jobs was one of them.  We don’t have a Steve Jobs here.  I’m sorry, but we don’t … Bezos [CEO of Amazon.com] realized that he didn’t need to be a Steve Jobs in order to provide everyone with the right products:  interfaces and workflows that they liked and felt at ease with.  He just needed to enable third-party developers to do it, and it would happen automatically.

In other words, Steve Yegge is saying that a company can obtain the innovation that will keep it relevant by crowd-sourcing its functionality – by building a platform rather than a simple product, so that developers outside of the company can build the functionality that they think is important. It’s essentially letting the market predict the needs of the market. This is how Facebook became so huge, Yegge says – without Angry Birds, Mafia Wars, and Farmville, Facebook would not be what it is today. I would add that without the Facebook badges on websites all over the internet, integrating websites with Facebook, and without stores integrating online shopping with Facebook, Facebook would not be what it is today. Facebook is a platform for a very diverse set of programs that want to interact with Facebook users.

But there’s a key dependency in any platform’s success, and my last sentence there touches on it – users. A platform isn’t going to be successful by itself just because it’s a great software platform. Yegge lightly touched on this point when he said a company building a platform needed a “killer app.” Continuing with the Facebook example, there was a key requirement for the platform to be a success – it had a ton of users. It wasn’t the quality of the APIs or the insightful planning of system architecture (if any such thing exists – my cursory reading has led me to doubt whether that’s the case at Facebook) that made Facebook so attractive to retail outlets, game developers, and websites all over the internet. No, it was the astronomical user base. Most companies, even those who didn’t really use it as a platform, would hear the mantra “if you’re not on Facebook, you need to get on there” weekly. The sheer volume of users was the major compelling reason people wanted to come and integrate with Facebook; Facebook was just smart enough to capitalize on its huge popularity by developing a platform-based strategy that let every company in the world come and interact with its users.

Coming Full Circle

That heading is supposed to be funny, since we’re talking social networks and I’m quoting an article about Google+. Hey, +1 this if you laughed :)! Oh wait, I don’t have a +1 badge on my blog. It’ll come, I think.

So, Google is trying to build a social network, and to do so, it needs to have a social relationship with developers. Yegge’s main argument in his article is that they have failed in that area – it’s a social network for users, but the software itself doesn’t have the service-oriented architecture necessary to attract any critical mass of enthusiastic developers. It’s like having a big party and only inviting dudes, or only girls – it failed to invite the people that would keep the other invited guests interested, keep them at the party. This is a principle that is operating at a heightened level in software development today – there are three parties in the value chain. There are users, who come online to socialize; the platform, which provides the users the place and tools to do so; and the developers, who build new functionality to keep that place and those tools relevant and interesting. This is very different from the old model, where you had the software and the users, who used the software one person at a time. There was no inherent value in the presence of a community – a product’s value was measured solely by the number of licenses it sold.

Is this an over-simplification? For example, you might say, surely people get online to do things other than socialize. There still need to be products that do things other than connect people. My answer: honestly, I’m not sure. Give me an example of something that you do online that isn’t “social” – and then ask yourself whether that activity, if it stays isolated from the social atmosphere that’s spreading across the entire web, won’t look profoundly old-fashioned in a couple of years. Any information that’s important or interesting enough for people to talk about online (and, as you probably know, that threshold is pretty low) becomes much easier to find.

My feeling is that this new dynamic is changing the very nature of software. For one thing, software is becoming simpler – especially the kind found on the social web, allowing for developers to create valuable applications faster. But more importantly, the social web shortens an application’s feedback loop, allowing (and requiring) developers to adapt more quickly to the market’s response to their applications. This is essentially a type of collaboration – it may not be there quite yet, but perhaps soon the feedback loop will truly be short enough to call the interactions between consumers and developers “collaboration.” Interestingly enough, “co-creation” of products is an essential tenet of doing business in the social era.

To read Yegge’s article, click here – you have to be logged in to Google+ to see it.

Do you have thoughts on platforms and the way social platforms like Facebook are changing the way we interact with technology?