Sonoma Partners Microsoft CRM and Salesforce.com Blog

The Best Method to Demo Mobile Applications

Today's post is written by Brian Kasic, Principal Consultant at Sonoma Partners.

Mobile applications are becoming more and more prevalent within CRM projects.

Seeing how they function is critical to the success of any mobile deployment. Yet in my experience, screen prints seem to be the standard way of showing functionality or training end users on how a mobile CRM application works. Users typically learn mobile apps by interacting with them; the apps should be intuitive and straightforward, and demoing an app or training end users on it should be just as intuitive. Getting started can sometimes be challenging, though, especially when there is a process involved or when end users are seeing the app for the first time.

At Sonoma Partners, we utilize a product called Reflector 2 to mirror your phone onto your computer via AirPlay. The set-up and the steps to mirror your app to your computer are simple, the license pricing is reasonable, and it is worth the corporate license fee. By mirroring the app on your computer, you can quickly create re-usable videos to assist end users when they are first starting to work with their new mobile CRM applications.

Here are the steps for getting started on an iPhone:

  1. Download a trial of Reflector 2 to your computer. If you end up liking it, pricing and licensing can also be found on the website.
  2. Launch Reflector 2 on your computer.

  3. Next, make sure your computer’s Wi-Fi and your phone’s Wi-Fi are on the same network. This is important and is where you can run into trouble. Be aware that if you get bounced off of Wi-Fi, you need to start over again; I’ve seen Reflector need to be completely closed and restarted to begin a new session after being bounced off of Wi-Fi.

  4. From your phone, swipe up from the bottom of the screen to open Control Center, then tap “AirPlay”.

  5. Within AirPlay, find your laptop and turn on the Mirroring toggle.

  6. This mirrors the content from your phone screen to your computer screen without wires.

  7. You can move the mirrored phone around your computer screen with your mouse by hovering over it and dragging it to another part of your screen.

From here, customers and end users can see the actions on your phone directly on your computer. I’ve given demos showing data being entered into the phone then immediately refreshed my CRM online environment showing the data being updated. You can also demonstrate voice activation on the phone and the time savings it can provide to enter notes or activities. Both of these demo tricks of showing the mobile app in action have fostered very positive feedback from the audience.

Android installation instructions can be found here.

Have a question about mobile CRM applications? We're here to help.

Topics: Enterprise Mobility

Microsoft Text Analysis and CRM–Tracking Client Sentiment

Microsoft has released a set of intelligence APIs known as Cognitive Services, which cover a wide range of categories such as vision, speech, language, knowledge, and search.  The APIs can analyze images, detect emotions, check spelling, translate text, recommend products, and more.  In this post I will cover how the Text Analysis API can be used to determine the sentiment of your clients based on the emails they send.

The idea is that any time a client (a Contact in this case) sends an email that is tracked in CRM, we will pass it to the Text Analysis API to see whether the sentiment is positive or negative.  In order to do this, we will register a plugin on create of email.  We will make the plugin asynchronous, since we’re using a third-party API and do not want to slow down email creation if the API is executing slowly.  We will also make the plugin generic and utilize the secure or unsecure configuration when registering the plugin to pass in the API key as well as the schema name of the sentiment field that will be used.

Below is the constructor of the plugin, which takes in either a secure or unsecure configuration expecting the format “<API_KEY>|<SENTIMENT_FIELD>”.
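
A minimal sketch of such a constructor (the class and member names here are illustrative, not from the original post):

using System;
using Microsoft.Xrm.Sdk;

public class SentimentPlugin : IPlugin
{
    private readonly string _apiKey;
    private readonly string _sentimentField;

    // CRM passes the unsecure and secure strings registered on the plugin step.
    // Either one may carry the "<API_KEY>|<SENTIMENT_FIELD>" value.
    public SentimentPlugin(string unsecureConfig, string secureConfig)
    {
        string config = !string.IsNullOrWhiteSpace(secureConfig)
            ? secureConfig
            : unsecureConfig;

        if (string.IsNullOrWhiteSpace(config) || !config.Contains("|"))
        {
            throw new InvalidPluginExecutionException(
                "Expected a configuration of the form \"<API_KEY>|<SENTIMENT_FIELD>\".");
        }

        string[] parts = config.Split('|');
        _apiKey = parts[0];
        _sentimentField = parts[1];
    }

    // Execute and AnalyzeText are sketched in the following snippets.
}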

Next is the Execute method of the plugin, which will retrieve the email description from the email record and pass it to the AnalyzeText method.  The AnalyzeText method will return the sentiment value, which we will then use to populate the sentiment field on the email record.
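
A sketch of that Execute method, continuing the class above; “description” is the email body attribute, and error handling is trimmed for brevity:

public void Execute(IServiceProvider serviceProvider)
{
    var context = (IPluginExecutionContext)serviceProvider
        .GetService(typeof(IPluginExecutionContext));
    var factory = (IOrganizationServiceFactory)serviceProvider
        .GetService(typeof(IOrganizationServiceFactory));
    IOrganizationService service = factory.CreateOrganizationService(context.UserId);

    var email = (Entity)context.InputParameters["Target"];
    string body = email.GetAttributeValue<string>("description");
    if (string.IsNullOrWhiteSpace(body))
    {
        return; // nothing to analyze
    }

    double sentiment = AnalyzeText(body);

    // Update only the sentiment field rather than the whole email record.
    var update = new Entity(email.LogicalName) { Id = email.Id };
    update[_sentimentField] = sentiment;
    service.Update(update);
}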

Then we have the AnalyzeText method, which passes the email body to the Text Analysis API and returns the sentiment value.
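
In sketch form it might look like the following; the endpoint URL and region are assumptions to adjust for your own Cognitive Services subscription, and the serialization uses the sandbox-friendly WebClient and DataContractJsonSerializer (System.IO, System.Net, System.Runtime.Serialization.Json, and System.Text are required):

private double AnalyzeText(string text)
{
    var request = new SentimentRequest
    {
        Documents = new[]
        {
            new SentimentDocument { Id = "1", Language = "en", Text = text }
        }
    };

    // Serialize the request body to JSON.
    string requestJson;
    using (var stream = new MemoryStream())
    {
        new DataContractJsonSerializer(typeof(SentimentRequest))
            .WriteObject(stream, request);
        requestJson = Encoding.UTF8.GetString(stream.ToArray());
    }

    using (var client = new WebClient())
    {
        client.Encoding = Encoding.UTF8;
        client.Headers[HttpRequestHeader.ContentType] = "application/json";
        client.Headers["Ocp-Apim-Subscription-Key"] = _apiKey;

        string responseJson = client.UploadString(
            "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment",
            requestJson);

        // Deserialize the response and return the score for our single document.
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(responseJson)))
        {
            var response = (SentimentResponse)new DataContractJsonSerializer(
                typeof(SentimentResponse)).ReadObject(stream);
            return response.Documents[0].Score; // 0 = negative, 1 = positive
        }
    }
}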

And finally, the classes used as input and output parameters for the Text Analysis API.
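
These can be simple data contracts that mirror the API’s JSON shape (again a sketch; requires System.Runtime.Serialization):

[DataContract]
public class SentimentRequest
{
    [DataMember(Name = "documents")]
    public SentimentDocument[] Documents { get; set; }
}

[DataContract]
public class SentimentDocument
{
    [DataMember(Name = "id")]
    public string Id { get; set; }

    [DataMember(Name = "language")]
    public string Language { get; set; }

    [DataMember(Name = "text")]
    public string Text { get; set; }
}

[DataContract]
public class SentimentResponse
{
    [DataMember(Name = "documents")]
    public SentimentScore[] Documents { get; set; }
}

[DataContract]
public class SentimentScore
{
    [DataMember(Name = "id")]
    public string Id { get; set; }

    [DataMember(Name = "score")]
    public double Score { get; set; }
}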

Now register the plugin on post-Create of Email, with the Text Analysis API key and the schema name of the sentiment field in either the secure or unsecure configuration.


Now when an email is created in CRM, once the asynchronous job is complete, the email record will have a Sentiment value in the range of 0 (negative) to 1 (positive).


The sentiment field on the email record can then be used as the source of a rollup field on the contact that averages the sentiment values of all the email records where the contact is the sender, letting you track the sentiment of your clients based on the emails they send you.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Integrating QuickBooks and Dynamics CRM

Today's post is written by Rob Jasinski, Development Principal at Sonoma Partners.

We recently needed to integrate QuickBooks Desktop with our Microsoft Dynamics CRM solution, specifically for invoices.

Since we have invoices that are generated from data that originates in CRM, our current process had us generate a report in CRM, then manually create the invoice in QuickBooks and CRM. We wanted to automate this process.

For the integration, we wanted to use Microsoft SQL Server Integration Services and create an SSIS package that we could schedule to run nightly, creating invoices in QuickBooks from data generated in CRM.

The first thing we needed to do was choose a tool that would allow us to connect to, and access the data stored in, QuickBooks. We looked at many tools, but the one thing we found in common was that every tool required a proxy to be running on the QuickBooks server (if someone is aware of a way to interface with QuickBooks directly, without the use of a proxy, please feel free to leave a comment in the comments section below).

The SSIS package then communicates with QuickBooks via this proxy rather than connecting directly, so if the proxy isn’t running, a connection to QuickBooks can’t be established. In the end we chose the QuickBooks Desktop connector from CData, as it seemed to meet all of our needs.

In the following example, we’ll give a brief demo of setting up an SSIS package to create an invoice in QuickBooks.

The first thing was to create a connection to the QuickBooks server (don’t forget the proxy application must be running on the QuickBooks server). The only required fields are the URL (of the QuickBooks server), user name, and password.


Then I set up a simple data flow task that queries invoice data from our CRM system and passes it to the CData QuickBooks destination component, which will then create the Invoice and Invoice detail records in QuickBooks.


When creating an invoice in QuickBooks there are a couple of things to note. First, there are some required fields that need to be passed in, and the invoice must have at least one detail record. At first this posed a problem for me, in that I was hoping to create the invoice first and then add detail lines later. Then I discovered there is a field on invoice called ItemAggregate, which allows you to pass in one or more invoice detail records in XML format, essentially creating the invoice and all detail records in one call. Below is an example of ItemAggregate data:

<InvoiceLineItems>
  <Row>
    <ItemName>Professional Fees - Consultant</ItemName>
    <ItemDescription>Consultant</ItemDescription>
    <ItemQuantity>210.7500</ItemQuantity>
    <ItemRate>10.00</ItemRate>
    <ItemAmount>2100.75</ItemAmount>
  </Row>
  <Row>
    <ItemName>Professional Fees - Sr. Consultant</ItemName>
    <ItemDescription>Sr. Consultant</ItemDescription>
    <ItemQuantity>84.0000</ItemQuantity>
    <ItemRate>15.00</ItemRate>
    <ItemAmount>1230.00</ItemAmount>
  </Row>
</InvoiceLineItems>

Once all the detail records and required fields were passed in, the invoice was successfully created in QuickBooks. Note that during this last step I was logging all errors returned by QuickBooks into an error log table. This let me do some trial-and-error runs of creating invoices, which helped me determine which fields were required, as those came back as errors.

I hope this small introduction to integrating Dynamics CRM with QuickBooks can help kick start any similar projects you’ve been thinking about. Have a question about integrating QuickBooks with Microsoft Dynamics CRM? We're here to help.

Topics: Microsoft Dynamics CRM

Support Work is Never Done

Today's post is written by Kristie Reid, VP of Consulting at Sonoma Partners.

In order for your CRM deployment to be successful, it is often what you do after go-live that is the determining factor.

CRM systems differ from other application deployments in that they must continue to evolve after launch. If not, the system quickly becomes stale and user adoption steadily decreases.

We often get asked to provide guidance on what types of resources will be needed once a CRM application is up and running. Our answer: it depends! Yes, we’ve done this hundreds of times, and yes, we have some best practice guidelines. But we prefer to work with your unique organization to determine exactly what it will take to keep your CRM application widely used, and successful, after you go live.

You aren’t reading this blog to hear that you need to hire a consulting firm to identify your post-deployment resources. So here are some general guidelines we use (please note a lot of these roles are not full-time positions but rather shared across multiple applications):

[Table: suggested post-go-live support roles]

Most of the roles are self-explanatory for IT organizations, but one resource that often gets overlooked is the Product Manager.

Similar to Product Managers for any off-the-shelf application, your CRM Product Manager has the final say on which features get released and when, based on business criticality. The hidden talent this individual must also possess, however, is the ability to continually sell your CRM system to the organization.

That selling occurs at the user level, where new people are being introduced to and trained on the system, all the way up to the executive level each time a new VP of Sales comes onboard and wants to understand where their sales people are spending their time. Don’t overlook this person – they will be the champion of your CRM application and will make sure its usefulness is embraced for years to come!

Do you need help shaping your support strategy? Contact us to learn more.

Topics: CRM Best Practices

Inside Edition: How Sonoma Partners Uses CRM to Track Time

Today's post is written by Matt Weiler, a Principal Architect at Sonoma Partners.

As a consulting company, we find that tracking time spent on projects is critical to billing our customers correctly, making sure people are busy, and making sure we're estimating accurately. In Grapevine (our internal name for CRM), Time has relationships to:

  • Projects (which is the level the time is billed at)
  • Items (which is the unit of development or customization work that has to be done)
  • Project Task (which bundles related time together so we can see how much time was spent developing vs. testing vs. designing, etc.)
  • Cases, in addition to the actual amount of time and assorted other fields.

We've covered Time entry on our blog in the past where you can see some screenshots of how this process worked in our CRM 4.0 environment.

As you can see, something that is a relatively simple idea (what did I do during these 15 minutes?) becomes a much more complicated process because of the way we want to use the data.

One of our first cracks at making this process easier was adding time-related fields to the Item entity. Most Time records are related to an Item in some way (development, testing, defects, design, etc.), so this was a way to easily enter the amount of time and the Task being performed, and a plugin behind the scenes would fill in fields like the Item and Project. While this worked well for some scenarios, others still required entry through the full Time form. This includes any time not related to an Item (internal meetings, time spent writing specifications, etc.). In addition, at the time, most of us kept track of our time either in Excel spreadsheets or (GASP!) on pen and paper. It was a laborious task to enter time every week, especially if you waited until the beginning of the next week, when time must be entered and finalized.

So, as an intern project, our developer Mike Dearing created what is now known as Time Buddy. Time Buddy was designed to make the process of tracking, entering, and reviewing time much easier and faster. Here's a quick look:


It's a Windows desktop app, so it only works on Windows PCs, and it has to be installed everywhere, so there's a bit of a maintenance downside compared to a centralized web site. However, it has built-in timers, connects directly to Grapevine to pull back active Projects, Items, and Project Tasks, and has a bunch of great time savers, like the ability to split or join multiple records and to import your weekly calendar from Outlook, which saves entry time for non-Item-based Time entries. And it caches data offline, so as long as you connect occasionally, you can track time while not connected to the Internet.

As Sonoma Partners continued to grow, we had more and more non-.NET developers using Macs instead of Windows PCs, so our next step was to add an editable grid inside CRM. This not only gave our non-Windows users a quick entry option, but we also incorporated the grid into a larger CRM dashboard that broke down time entries by day and by project, making it easier to review the entered time and validate that simple mistakes hadn't been made before submitting the time for final approval.


Our latest updates have been in response to a more diverse set of users utilizing Time Buddy. As we’ve added an iOS practice and graphic and UX designers, those people are using Macs day to day and have had to log time the old-fashioned way. When we thought about how to give them an easier way to enter time, we took a step back and decided it also might be cool to have an iOS app for time entry as well. So we created a set of web services to abstract the time entry process from CRM and used those services to build our new clients. We’ll also be looking to update the Windows version of Time Buddy to use the same services. This shields us a little from CRM upgrades, lets us more aggressively use new features or APIs in CRM without having to update a bunch of apps, and routes all external time entry through the same place.

The history of Time entry at Sonoma is, I think, a classic example of the crawl, walk, run CRM strategy that makes sense:

  1. Identify the data you'd like to start tracking
  2. Build out a basic implementation of a way to track and report on that data
  3. Identify inefficiencies through talking to employees or looking at the data you've already collected
  4. Develop targeted apps and websites to make the process easier, more efficient, and increase data reliability
  5. Repeat steps 3 & 4 as your business, process, and/or people change
Topics: Microsoft Dynamics CRM

Building CRM Web Resources with React

Web Resource Development

Microsoft Dynamics CRM has allowed us to develop and host custom user interfaces as Web Resources since CRM 2011.  Since then, the web has exploded with JavaScript frameworks.  In addition, browsers have started to converge on standards in both JavaScript object support and CSS.  In short, it’s a great time to be building custom user interfaces on top of Microsoft Dynamics CRM.

Today we’ll be working with React, an emerging favorite in the JavaScript world.  React’s key benefits are its fast rendering time and its support of JSX.  React is able to render changes to the HTML DOM quickly, because all rendering is first done to JavaScript objects, which are then compared to the previously generated HTML DOM for changes.  Then, only those changes are applied to the DOM.  While this may sound like a lot of extra work, typically changes to the DOM are the most costly when it comes to performance.  JSX is a syntax that combines JavaScript and an XML-like language and allows you to develop complex user interfaces succinctly.  JSX is not required to use React, but most people typically use it when building React applications.

The Sample Application

To demonstrate these benefits, we’ll build a simple dashboard component that displays a list of the top 10 most recently created cases.  We’ll have the web resource querying for new cases every 10 seconds and immediately updating the UI when one is found.


The files that I will be creating will have the following structure locally:

CaseSummary/ 
├── index.html 
├── styles.css 
├── app.jsx 
└── components/ 
    ├── CaseSummary.jsx     
    ├── CaseList.jsx 
    └── Case.jsx

However, when we publish them as web resources in CRM, they will be simplified to the following:

demo_/
└── CaseSummary/ 
    ├── index.html 
    ├── styles.css 
    └── app.js

Other than including the publisher prefix folder, the main change is that all of the JSX files have been combined into a single JavaScript file.  We’ll step through how to do this using some command line tools.  There are a few good reasons to “compile” our JSX prior to pushing to CRM:

  1. Performance – We can minify the JavaScript and bundle several frameworks together, making it more efficient for the browser to load the page.
  2. More Performance – JSX is not a concept that browsers understand by default.  By converting it to plain JavaScript at compile time, we can avoid paying the cost of conversion every time the page is loaded.
  3. Browser Compatibility – We can write our code using all of the features available in the latest version of JavaScript and use the compiler to fill in the gaps for any older browsers that might not support these language features yet.
  4. Maintainability – Separating our app into smaller components makes the code easier to manage.  As you build more advanced custom UI, the component list will grow past what I am showing here.  Because the files are merged together, no matter how many JSX files we add to the project we just need to push the single app.js file to the CRM server when we are ready.
  5. Module Support – Many JavaScript components and libraries are distributed today as modules.  By compiling ahead of time we can reference modules by name and still just deploy them via our single app.js file.

Exploring the Source Code

The full source code for the example can be found at https://github.com/sonomapartners/web-resources-with-react, but we will explore the key files here to add some context.

index.html

This file is fairly simple.  It includes a reference to CRM’s ClientGlobalContext, the compiled app.js and our style sheet.  The body consists solely of a div to contain the generated UI.
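
A minimal sketch of that structure; the container id and the relative path to ClientGlobalContext.js.aspx are assumptions:

<html>
  <head>
    <title>Case Summary</title>
    <link rel="stylesheet" href="styles.css" />
    <script src="../ClientGlobalContext.js.aspx"></script>
  </head>
  <body>
    <!-- React renders the CaseSummary component into this div -->
    <div id="container"></div>
    <script src="app.js"></script>
  </body>
</html>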

app.jsx

Now things start to get more interesting.  We start by importing a few modules.  babel-polyfill will fill in some browser gaps.  In our case it defines the Promise object for browsers that don’t have a native version (Internet Explorer).  The last three imports will add React and our top level CaseSummary component.  Finally we register an onload event handler to render our UI into the container div.
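
A sketch of that entry point, assuming index.html defines a div with the id “container”:

import 'babel-polyfill';                 // Promise support for older browsers (IE)
import React from 'react';
import ReactDOM from 'react-dom';
import CaseSummary from './components/CaseSummary.jsx';

// Render the top-level component into the container div once the page loads.
window.addEventListener('load', function () {
    ReactDOM.render(<CaseSummary />, document.getElementById('container'));
});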

components/CaseSummary.jsx

CaseSummary is our top level component and is also taking care of our call to the CRM Web API.  This is also our first look at creating a component in React, so let’s take a look at each function.  React.createClass will take the object passed in and wrap it in a class definition.  Of the five functions shown here, four of them are predefined by React as component lifecycle methods: getInitialState, componentDidMount, componentWillUnmount, and render.  getInitialState is called when an instance of the component is created and should return an object representing the starting point of this.state for the component.  componentDidMount and componentWillUnmount are called when the instance is bound to and unbound from the DOM elements respectively.  We use the mounting methods to set and clear a timer, which calls the loadCases helper method.  Finally, render is called each time the state changes and a potential DOM change is needed.  We also have an additional method, loadCases, where we use the fetch API to make a REST call.  The call to this.setState will trigger a render whenever cases are loaded.  We definitely could have made this component smarter by only pulling case changes, but this version demonstrates the power of React by having almost no impact on performance even though it loads the 10 most recent cases every 10 seconds.
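
Here is a sketch of the component’s shape; the Web API version, query, and field names are illustrative:

import React from 'react';
import CaseList from './CaseList.jsx';

var CaseSummary = React.createClass({
    getInitialState: function () {
        return { cases: [] };            // starting point for this.state
    },
    componentDidMount: function () {
        // Poll for the 10 newest cases every 10 seconds while mounted.
        this.loadCases();
        this.timerId = setInterval(this.loadCases, 10000);
    },
    componentWillUnmount: function () {
        clearInterval(this.timerId);
    },
    loadCases: function () {
        // GetGlobalContext comes from the ClientGlobalContext.js.aspx reference.
        var url = GetGlobalContext().getClientUrl() +
            '/api/data/v8.0/incidents' +
            '?$select=title,ticketnumber,createdon&$orderby=createdon desc&$top=10';
        fetch(url, {
            credentials: 'same-origin',
            headers: { 'Accept': 'application/json' }
        })
            .then(function (response) { return response.json(); })
            .then(function (data) {
                this.setState({ cases: data.value });   // triggers render
            }.bind(this));
    },
    render: function () {
        return <CaseList cases={this.state.cases} />;
    }
});

export default CaseSummary;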

components/CaseList.jsx

By comparison, CaseList.jsx is pretty straightforward.  There are two interesting parts worth pointing out.  The use of this.props.cases is possible because CaseSummary.jsx set a property on the CaseList like this: <CaseList cases={this.state.cases} />.  Also, it is important to notice the use of the key attribute on each Case.  Whenever you generate a collection of child elements, each one should get a value for the key attribute that React can use when comparing the Virtual DOM to the actual DOM.
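
A sketch, with the incidentid key and list markup as assumptions:

import React from 'react';
import Case from './Case.jsx';

var CaseList = React.createClass({
    render: function () {
        // Each generated child gets a stable key so React can diff efficiently.
        var cases = this.props.cases.map(function (c) {
            return <Case key={c.incidentid} case={c} />;
        });
        return <ul className="case-list">{cases}</ul>;
    }
});

export default CaseList;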

components/Case.jsx

The simplest of the components, Case.jsx outputs some properties of the case with some simple HTML structure.
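
For example (the fields and markup are illustrative):

import React from 'react';

var Case = React.createClass({
    render: function () {
        return (
            <li className="case">
                <span className="ticket">{this.props.case.ticketnumber}</span>
                <span className="title">{this.props.case.title}</span>
            </li>
        );
    }
});

export default Case;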

Compiling the Code

We’re going to start with using NodeJS to install both development tools and runtime components that we need.  It is important to note that we’re using NodeJS as a development tool, but it isn’t being used after the code is deployed to CRM.  We’ll start by creating a package.json file in the same folder that holds our index.html file.

package.json

After installing NodeJS, you can open a command prompt and run “npm install” from the folder containing package.json.  This will download the packages specified in package.json to a local node_modules folder.  At a high level, here is what the various packages do (a complete package.json example follows the list):

  • webpack, babel-*, imports-loader, and exports-loader: our “compiler” that will process the various project files and produce the app.js file.
  • webpack-merge and webpack-validator: used to help manipulate and validate the webpack.config.js (we will discuss this file next).
  • webpack-dev-server: a lightweight HTTP server that can detect changes to the source files and compile on the fly.  Very useful during development.
  • react and react-dom: The packages for React.
  • babel-polyfill and whatwg-fetch: bring older browsers up to speed.  In our case we are using them for the Fetch API (no relation to Fetch XML) and the Promise object.
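
Put together, a package.json for this setup might look like the following; the version numbers are illustrative of the webpack 1.x / React 15 era:

{
  "name": "case-summary",
  "version": "1.0.0",
  "scripts": {
    "build": "webpack",
    "start": "webpack-dev-server"
  },
  "dependencies": {
    "babel-polyfill": "^6.9.0",
    "react": "^15.1.0",
    "react-dom": "^15.1.0",
    "whatwg-fetch": "^1.0.0"
  },
  "devDependencies": {
    "babel-core": "^6.9.0",
    "babel-loader": "^6.2.0",
    "babel-preset-es2015": "^6.9.0",
    "babel-preset-react": "^6.5.0",
    "exports-loader": "^0.6.3",
    "imports-loader": "^0.6.5",
    "webpack": "^1.13.0",
    "webpack-dev-server": "^1.14.0",
    "webpack-merge": "^0.14.0",
    "webpack-validator": "^2.2.0"
  }
}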

The scripts defined in the package.json are runnable by typing npm run build or npm run start from the command prompt.  The former will produce our app.js file, and the latter will start up the previously mentioned webpack-dev-server.  Before running either of them, though, we need to finish configuring webpack.  This requires one last config file, placed in the same folder as package.json and named webpack.config.js.

webpack.config.js

As the file name implies, webpack.config.js is the configuration file for webpack.  Ultimately it should export a configuration object which can define multiple entries.  In our case we have a single entry that monitors app.jsx (and its dependent files) and outputs app.js.  We use the webpack.ProvidePlugin plugin to inject whatwg-fetch for browsers that lack their own fetch implementation.  We also define that webpack should use the babel-loader for any .jsx or .js files it encounters and needs to load.  The webpack-merge module allows us to conditionally modify the configuration.  In our case we are setting the NODE_ENV environment variable to “production” for a full build and turning on JavaScript minification.  Finally we use the webpack-validator to make sure that the resulting configuration is valid.
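
A sketch consistent with that description; the entry path, loader options, and the build-vs-start check are assumptions:

var webpack = require('webpack');
var merge = require('webpack-merge');
var validate = require('webpack-validator');

var config = {
    // Single entry: app.jsx and everything it imports are bundled into app.js.
    entry: './app.jsx',
    output: { path: __dirname, filename: 'app.js' },
    module: {
        loaders: [
            // Let Babel turn ES2015 + JSX into plain JavaScript.
            {
                test: /\.jsx?$/,
                exclude: /node_modules/,
                loader: 'babel-loader',
                query: { presets: ['es2015', 'react'] }
            }
        ]
    },
    plugins: [
        // Inject the whatwg-fetch polyfill wherever fetch is referenced.
        new webpack.ProvidePlugin({
            'fetch': 'imports-loader?this=>global!exports-loader?global.fetch!whatwg-fetch'
        })
    ]
};

// "npm run build" produces the minified production bundle.
if (process.env.npm_lifecycle_event === 'build') {
    config = merge(config, {
        plugins: [
            new webpack.DefinePlugin({
                'process.env.NODE_ENV': JSON.stringify('production')
            }),
            new webpack.optimize.UglifyJsPlugin()
        ]
    });
}

module.exports = validate(config);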

Deploying and Continuing Development

At this point all of the files should be set up.  To deploy the code, you would run npm run build and then deploy index.html, app.js, and styles.css as web resources to CRM. 

If it becomes tedious to keep deploying app.js to CRM as you make small changes, you can set up an AutoResponder rule in Fiddler to point at the webpack-dev-server.  Once this rule is in place, when the browser requests files like index.html and app.js from the right subfolder of the CRM server, Fiddler will intercept the request and provide the response from webpack-dev-server instead.  This way you can just save your local JSX files and hit refresh in the browser as you are developing.  Of course you need to be sure that you have started webpack-dev-server by running npm run start from the command line.


With that you should be set to start building your own CRM Web Resources using React!

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2013 Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

Analyzing Audit Logs using KingswaySoft

If you have ever looked into analyzing audit log records in Dynamics CRM, you know how hard it can be.  Using the API, there isn’t a good way to retrieve all the audit log records for a specific entity: you can only retrieve all the changes for a certain attribute, or all the changes for a specific record.  If you’re on-premise and have access to the database, you can get to the audit detail records, but you will find that the data is very hard to parse through.

Thanks to the wonderful folks at KingswaySoft, with version 7.0, this is no longer the case.  With KingswaySoft v7.0, audit details can easily be retrieved for a specific entity and then can be dumped into a file or a database for further reporting or analysis.

In order to accomplish this, first you will need to make sure you have the SSIS Toolkit installed and then download KingswaySoft v7.0 here.  Then open up Visual Studio and create a new Integration Services project.


Next add a Data Flow Task and drill into it.


Then we will set up a Dynamics CRM Connection using the Connection Manager.  In the Connection Manager view, right-click and select “New Connection”.


Now select the DynamicsCRM connection and click Add.


This will pop open the Dynamics CRM Connection Manager which will allow you to connect to your Dynamics CRM organization.


Now use the SSIS Toolbox view to drag the Dynamics CRM Source component onto the canvas.


Double-click the Dynamics CRM Source component to open the editor.  Select the Connection Manager that you created earlier and set AuditLogs as the Source Type.  In the FetchXML text editor, write a FetchXML query to pull back the records of the entity whose audit details you want to retrieve.  In my example I’m retrieving 25 account records.
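
The query for my example would look something like this (the attribute list is illustrative):

<fetch top="25">
  <entity name="account">
    <attribute name="accountid" />
    <attribute name="name" />
  </entity>
</fetch>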


Select Columns on the left and pick the columns you would like to be part of your report.  In my example I’m going to use action (Create, Update, Delete, etc.), objectid and objecttypecode (the record that was changed), and userid and useridname (the user that triggered the change).


The Dynamics CRM Source component will have two outputs: one for the header audit record and one for the list of audit detail records.  In my example I want to join these two outputs into one dataset so I can display both sets of data in the same report.  To do this, drag two Sort components onto the canvas and connect each output to its own Sort component.


Now double-click the first Sort to open the editor.  Select auditid as the sort attribute, since it is the unique key that joins the two datasets together, and check the “Pass Through” box for all the other columns you want to use in your report.


Now double-click the other Sort component and perform the same steps.


Next drag a Merge Join component onto the canvas, connect the two outputs from the Sort components into it, and double-click it to open the editor.  Select Inner join as the Join type, then select any columns you want in your report and map them in the bottom pane.


Now drag a Derived Column component onto the canvas and connect the output from the Merge Join into it.  This component is needed because we’re going to output the data into a CSV file, so the oldvalue and newvalue columns must be converted from DT_NTEXT to DT_TEXT.  Open the component’s editor and set an expression to convert “oldvalue” to DT_TEXT using the 1252 code page, then repeat the same for “newvalue”.
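
In the expression editor, that cast uses SSIS cast syntax and looks something like the following (column names as exposed by the source):

(DT_TEXT,1252)oldvalue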


Lastly, use a Flat File Destination to output the audit records into a CSV file that can be opened in Excel, mapping the columns you carried through in the earlier steps.



Then you can run the SSIS package, and you should get an output file that displays all the audit records for the first 25 retrieved accounts.  The output will show the name of the user that made the change, the field that was changed, the old value, the new value, and whether the change was a Create or an Update.


So there you have it!  Thanks to the KingswaySoft toolkit, it is now possible to extract audit logs into a readable output that can be analyzed as needed.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2015 Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

How Professional Services Firms Can Crawl, Walk, Run with CRM - Part One

Today's post is written by Bryson Engelen, a Sales Engineer at Sonoma Partners.

Professional Services firms know how critical it is to their business to implement or improve a CRM system.

But oftentimes the way they approach an implementation can doom them to failure. One of the major problems they encounter is trying to please everybody all at once. Their “boil the ocean” approach overwhelms users and delivers too much too soon.  Instead, it’s better to think of a CRM as a living, breathing thing, and to build upon it over time based on how people ACTUALLY work, not how they THINK they work.

This can only be done by approaching a CRM implementation with a well-thought-out crawl, walk, run approach that keeps the users at the center of an ongoing conversation. 

In this series, we will take a look at the common CRM use cases for professional services firms as they master the basics (crawl), start using CRM on a more strategic level (walk), and then leverage CRM as a platform to solve business problems that would otherwise require custom solutions (run).  Think of the suggestions in each phase as an a la carte menu; the phase designations may not perfectly suit your firm, so you may implement what is described as a “walk” item in your crawl and vice versa.  Let’s look at some of the crawl items for professional services firms in this post, and explore the “walk” and “run” CRM workloads in subsequent posts.

The “crawl” items on your CRM list should be basic things tied to pretty standard CRM functionality that are relatively easy to implement but offer very high value.  Typically, these are tactical, day-to-day quick wins that will make your users’ everyday jobs easier. By getting these simpler things right, you can build goodwill around CRM and inspire confidence in your firm that it is the right technology to be building your practice around.

Here are some examples of how your firm can be smart about relatively simple things.

Account/Client Management:

Modeling Clients in CRM can be challenging, since most CRM systems have Account record types that represent companies and Contact record types that represent people. These definitions don’t always fit how your firm thinks about clients.  Additionally, an individual client who is a person may also have a relationship to a company that is a client record for your firm, like being on the Board of that company. CRM doesn’t traditionally model that well out-of-the-box.  This can be solved in a variety of ways, like having different Account Types and/or making the concept of “Client” separate from the concept of “Account.”

Account/Client Segmentation:

Firms often segment their Clients by a dizzying array of categories that overlap and often contain redundancies. Client, former client, target, referral source, and practice all relate to the company’s relationship to the firm; industry designations, public/private, and geographic designations are parameters of the company itself (regardless of whether it is a client).  The model must work for Sales, Marketing, Delivery, and other business functions, and be simple enough for all of those stakeholders to understand and use.

Contact Rules of Engagement:

Many firms actively reject the concept of “ownership” at a company or person level (clients “belong” to the firm, not to a partner) but will designate a “Relationship Partner” who is responsible for the client.  However, individual practitioners will still refer to the company as “my client.” For this reason, it is often helpful to have a business process that defines the rules of interacting with the firm’s Contacts and Accounts. For example, the Partner in Charge (PIC) of a Client may want to be consulted before somebody reaches out to said Client regarding new services, marketing efforts, etc. We typically don’t automate this in CRM, but it should be discussed when working with professional services clients to ensure the data in CRM can, at a minimum, facilitate the process.  To really get your firm to fully realize its potential to cross-sell and upsell, and to provide ideal levels of service to your clients, you must tackle the rules of engagement with your clients.

Knowing Who Knows Whom:

Professional services firms are highly relationship driven and generate the vast majority of their business from those relationships. Many CRM systems have rudimentary tools for tracking relationships, but they typically only go one level deep (Partner A knows Client X) and don’t show the web of relationships, nor the strength of those relationships (Partner A’s relationship with Client X is strong AND Client X’s relationship with Prospect 1 is strong, etc.).  Sonoma Partners has developed a custom visualization tool called Relationship Mapper to make relationship data easier to consume. Relationship Mapper lets you view connections between people, companies, and staff on a relationship web, displays the nature and strength of each relationship in a hover-over, and allows you to pivot on different individuals on the relationship map.  This makes it easy to quickly and clearly see the relationships tied to a Client, plot how to leverage a social network by jumping from contact to contact, and see which customers will provide the maximum value for your networking time and efforts.

Another aspect of relationship tracking is following individuals as they move from one organization to another.  This relates heavily to the marketing segmentation point we covered under Account/Client Management.  Professional services firms often want to track a contact throughout their employment, starts/stops, and former employment relationships, and to track alumni, which can be challenging as former User records become Contact records.  Finally, a professional services firm might have existing tools that track relationships, oftentimes by monitoring email traffic and other Outlook data between your firm and the Client.  Getting relationship tracking right early on in your CRM implementation generates a lot of goodwill toward the system, and there are many nuances and tricks to ensure it is successful.

Sales Processes:

First, let’s recognize a few stigmas and caveats.  Few professional services firms have a dedicated sales team, and a lot of “sales management” is performed by “doers,” who see tracking Opportunities in CRM as too much overhead.  Also, many partners hate the word sales and focus on relationships.  Time and time again, professional services firms will say “we don’t sell, we build relationships,” which is both true and not true.  While your firm’s culture may hate the word “sales,” it’s wise to think of, and build processes around, how you acquire business and clients in “sales” terms from an organizational perspective, even if you market those efforts internally as something else.

Seller/doers often chafe at what they see as restrictive processes around entering sales information, and feel tracking Opportunities takes time and is irrelevant.  To solve for this, we need to provide users with a quick way to enter and manage this information, like the ability to track updates from Outlook or mobile (easily, with minimal typing), and to make sure there is a simple, documented process in place for the many types of sales your firm does.  Few firms have a defined sales process that they follow when working on bringing in new business, and getting staff to follow those best practices can be challenging (assuming they actually know the processes exist).  However, this is critical, and CRM should never be the driver for the sales process.  Rather, it is important to have an internal business conversation to establish what your “process(es)” look like (or should look like), completely separate from the CRM system of choice.  After that business process is identified, CRM should be configured in a way that supports your process(es).  Process is pluralized because there needs to be a strong distinction laid out between New Business Opportunities (net new clients), Expansion Opportunities (cross-sell), and Retention Opportunities (with any luck, upsell).  For firms to minimize cost of sale and maximize efficiency and ROI, they must be able to accurately measure and adjust business processes.  Pipeline tracking can also be useful for planning staffing needs for projects.  Partners might not care about the word “sales,” but they probably do care about knowing how many consultants/accountants/whatever they’re going to need in the next X months.

Mobility:

Perhaps the biggest key to success for professional services firms is getting mobile right, but very few of them do, because they don’t have a mobile strategy.  Many firms just check that their CRM system has a mobile app, but never really evaluate whether that app aligns with their users’ needs, and never put in the work to customize the app to fit user scenarios.  For mobile to be successful, mobile user groups and use cases need to be identified early on.  This can be done really well with ride-alongs: shadowing users to see what they do when they are out of the office in the course of a normal business day.  Learning why a partner keeps a stack of Post-Its in their briefcase, and incorporating that into a mobile app, can really be the key to driving user adoption.  In addition to knowing your firm’s use cases, you must also know your device landscape, develop a strategy around device/software updates, consider your security needs, and plan to revisit your mobile strategy on a regular basis.  Mobile consumption of CRM may evolve far more quickly than the rest of the CRM implementation, and you may need to iterate on mobile CRM every few months as the needs of your users change or become more clearly identified.  Your firm may also find that the native CRM app is overwhelming or limiting, and may want to consider a custom, use-case-specific mobile app.

Those are just some of the most common things many professional services firms may need additional guidance on in the first phase of their CRM implementation. There are plenty of others, and there will be even more examples in the next part of our series around a phased approach to CRM when we talk about getting more strategic in the “walk” phase. 

Hopefully, this post is helping you recognize how nuanced a CRM implementation can be, which is why a phased approach works best. 

Simply standing up a basic system and walking away is a recipe for wasted time, money, and energy, and won’t build goodwill with your users.  If you want to hear more about how to implement CRM for professional services firms using a phased approach, feel free to Contact Us.

Topics: CRM for Professional Services

A Step by Step Guide to Create Your First Survey with Dynamics CRM 2016 Voice of the Customer

Dynamics CRM 2016 was recently released, and with it a whole slew of new features and functionality.  A bunch of features were planned for the initial 2016 release but, for one reason or another, were delayed.  This website is a very simple way of seeing what’s been released for primetime versus what’s in preview, what’s in development, and what’s been indefinitely postponed.

One such feature that wasn’t immediately available at the release of 2016 was Voice of the Customer.  This is the ability to create, send, and monitor surveys from Dynamics CRM.  This feature is currently only available for CRM Online, and below I’ll go into more detail on how to get it enabled, and how to create your first survey.

Note that this feature is delivered through an integration with Azure web services. Survey delivery and response capture are queued through Azure, taking that workload off of your CRM system for the best possible performance.  This also means that there can be a delay before survey responses make it into CRM.

Below is an overview of enabling Voice of the Customer and of setting up and sending your first survey.  This post won’t go into everything that’s available with Voice of the Customer, as there’s a lot to it, but it will cover the basics.

Enabling

To enable Voice of the Customer, log into the CRM Online Administration Center, select the org you want to install VOTC to, and click Solutions.  You’ll be taken to a page with the preferred solutions that you can install; click the “Install” icon to start the installation.


Once the installation is complete, navigate to Solutions in CRM and open the Voice of the Customer solution.  Check “I agree to the terms and conditions” and click “Enable Voice of the Customer”.  You’re now set to start configuring your first survey!


Survey Creation

The Voice of the Customer functionality allows you to apply theming to your surveys.  To do so, navigate to the Images and Themes area of the VOTC module.


From there you can create an Image record and upload the logo you want to use in your survey; applying it to the survey will be accomplished in a later step.  After you upload the logo and save the record, you’ll be able to see a preview of the image.


You can also go to the Themes area and create a new Theme to use alongside your logo.  You have the ability to change the colors of most of the survey elements, such as the header, navigation bar, and progress background.  I strongly recommend enlisting a UX engineer to help you pick your colors wisely so that they don’t clash.  If you want to get more advanced, you can even upload your own CSS to apply more custom styles to your survey.


Now that you have your image and theme set up, you’re ready to create your survey.  Navigate to Voice of the Customer –> Surveys and click New to create your new survey.  You’ll see on the survey form that there are a lot of options to configure your survey.  We’re not going to cover them all in this post, but you’ll notice that we’re able to apply the Image and Theme we created previously.


To actually start building out the survey questions, you need to change from the Survey form to the Designer form.  You’ll notice there’s also a Dashboard form, where you can see statistics about survey responses.  For now, click Designer and start creating some questions.


On the Designer form, you can add or delete pages in your survey via the buttons that appear underneath the vertical page layout on the left.  You can’t delete the Welcome or Complete pages – those are required for all surveys.


In design mode, you can drag question types from the right onto the main pane in the middle.  When you hover over a question on the page, you can delete the question, make quick inline edits to the question label, or click the pencil icon to open a more advanced editor where you can change the question’s other settings.


In the text box of the question (and of any label control on the survey), you can click on the (Pipe) dropdown to insert piped data into your survey.  We’ll see how this works with workflows later, when we create a workflow to automatically send out the survey upon case resolution.  In this example, we’ll insert the case number into the survey question, using the Other1 pipe to carry this data (again, that’s set up when you create the workflow, and we’ll discuss it in a later step).


Here’s what our welcome page looked like with all the pipes in it.  We want to create a very personalized experience for the customer as they take the survey.  I also threw the other pipes in there so you can see how we’re able to pull as much data out of CRM as possible to personalize the survey for our customer.


Something else you can do to add logic to your survey is to create Response Routings.  You’d use a response routing when you want a customer to fill out an additional question if they answered a certain way on a previous question.  For example, you may ask the customer how they’d rate the experience with your company, and if they provide a low rating, you may want to display an additional question to gather more information on why they felt that way.  To get to response routings, click on the related records dropdown of your survey.


When you set up your response routing rules, you need to create Conditions and Actions for each Response Routing.  In our example, we only show the “Can you please provide us with additional information” question if the user responded 1 to the star rating question; otherwise we don’t show it.


After completing the above, your survey is ready to be published.  If you toggle back to the Survey form, you can click on the Preview button to see what the survey would look like to your end users.  When you’re all set, you can click on Publish so that the survey is now accessible externally.

Survey Automation and Results

Now that you have your survey set up, you can use it along with native CRM workflow to have surveys automatically sent to your customers based on changes to CRM data.  For example, let’s create a workflow that automatically sends our survey to the customer of a Case when the case is marked Resolved, asking how their experience working with your support team was, so you can make improvements if needed, or provide recognition where deserved.

First off, create a new Workflow and have it run on update of the Case Status field.  Check that the status has been updated to Resolved, and then add a step to create a new email.


When editing the email step of the workflow, copy the value from the “Email Snippet” field of the Survey and paste it into the body of the email step in your workflow.


Notice that in the email body I’m making use of the piped tokens (that I had placed in my survey earlier) with dynamic data from the Case record the workflow is running on.  It doesn’t matter which field from the record I use within each pipe; in the actual survey the user is taken to, the pipes are resolved to the actual data on the Case that was recently resolved.

Make sure to Activate the workflow, and then you can test it out.  Once a Case is resolved in CRM, an email containing the survey link is sent to the customer.


If the user clicks on the hyperlink, they’ll be taken to the actual survey.  As stated above, the pipes used in the survey are resolved to the actual data from the case.  You can also see that if I rate my overall experience greater than 1 on the 5-star rating, I won’t see the question asking why I rated the overall experience a 1, based on the routing rules we set up earlier.


Also note that the survey has a responsive design, so if you’re accessing it from a mobile device such as a phone, the survey resizes to fit the screen appropriately.


Upon completion of the survey, and after the data syncs back from Azure to Dynamics CRM, you can switch to the Dashboard form on your survey record to see the results trickling in from your survey.


You can also navigate to Survey Responses off of the Survey to see the individual responses.  If you open up a response, you’ll be able to see the individual questions and answers that were part of that specific response.

Note: the responses (including the question and answer) are stored in a first-class Question Responses entity.  This means that if you wanted to take this one step further, you could create a workflow on the Question Responses entity, and if a Question Response record is created with a poor response (e.g., the customer rated the overall experience 1 star), an email can be sent to the appropriate team to follow up on why that customer answered the question the way they did.


Gotchas

As I worked through and tested my first survey, I ran into a few gotchas that are worth noting, as I suspect others may run into similar issues.

First off, when using Response Routings, if you only want to show a question when another question has a certain value (in my case, showing a text box when someone rated the service a 1), you probably don’t want the text box to appear before the customer has answered the rating question.  In other words, you don’t want the text box to appear when they initially load the page of questions; you ONLY want it to appear when they rate your service a 1.  For this to happen, make sure that you set the Visibility field on that specific question to “Do not display”, which is the default visibility of the question.

Next, I ran into an issue where the pipes in my survey were not actually being populated with dynamic data from CRM.  It turned out that when I was testing with my workflow, I had copied the survey’s Email Snippet into my workflow email body more than once.  This causes the Email Snippet and piped data to break; after I removed the duplicate Email Snippet from my workflow email, the pipes began to work as expected.

Also note that if you want the updates you made to your survey to go live, you’ll need to publish the survey after making changes.  Simply saving it using the native CRM buttons will not publish it to Azure; it just saves the updates in CRM.

Finally, if your survey responses aren’t being returned to CRM, navigate to the Voice of the Customer solution and make sure to click on the link to Trigger Response Processing.  Note that this could take up to 15 minutes to complete and for responses to appear in CRM.


For more information on Voice of the Customer, head over to Microsoft’s website.

Topics: Microsoft Dynamics CRM Microsoft Dynamics CRM 2016 Microsoft Dynamics CRM Online

How to Plan Test Data for Testing Mobile Applications

Today's post is written by Jen Ford, Principal QA at Sonoma Partners.

When brainstorming test data for testing out a mobile app, I go with what I learned in the first grade: the 5 W’s – Who, What, Where, When, and Why.  In this blog post, I will address the “Who” and the “What”.

The “Who” - Security

Security is extremely important to consider in a mobile app.  If you are connecting to a back-end system, you want to make sure that the security that exists on the back end is mirrored in the mobile application.  This means users shouldn’t see anything in the app that they can’t see in their own environment.  This is a great place to start looking for defects: not properly handling security within the app can lead to crashes and data loss when syncing to the back-end system.  For example, let’s consider a requirement for expense submission and approval.

Requirement: Users will create expenses and submit them for approval.  The manager will receive an e-mail summary of the expenses, and will approve the expenses.

  • All users should be able to create, submit, and view their own expenses.
  • A sales person cannot view anyone else’s expenses.
  • A manager can view their own expenses, and all the expenses of their direct reports.
  • No users should be able to approve their own expenses.
  • A manager can view, edit, and approve the expenses of all their direct reports.

Using these requirements, we can determine what permissions each role should have.  This is as simple as jotting them down in an Excel spreadsheet or a table like the one below:

Expense Security Scenarios

| ID # | Overview | Salesperson | Sales Manager |
| --- | --- | --- | --- |
| ES-01 | Create new expenses | Yes | Yes |
| ES-02 | Create new expenses on behalf of others | No | No |
| ES-03 | View their own submitted expenses | Yes | Yes |
| ES-04 | View others’ submitted expenses | No | Yes |
| ES-05 | Update own expenses before submitting | Yes | Yes |
| ES-06 | Update others’ expenses before submitting | No | Yes |
| ES-07 | Update own expenses after submitting | No | No |
| ES-08 | Update others’ expenses after submitting | No | Yes |
| ES-09 | Approve their own expenses | No | No |
| ES-10 | Approve others’ expenses | No | Yes |

Now we know that there are 10 scenarios to test for each User Role. The next step for each Expense Security Scenario is to identify what test data needs to be set up to accomplish each test above.

The “What” - Test Data

Careful planning before testing starts is the key to smooth test execution.

By putting forth the effort to plan out your test data, you will consider all scenarios up front.  This leads to smoother execution, and makes it easier to bring in other people to test if the timelines are tight.  Planning your test data, and setting it up if it needs to reside in a system that integrates with the application, can be done while development is going on.  The important thing is to figure out what is happening on each page of the app.

Consider our expense submission and approval example above.  First, let’s think about what expenses we will want to set up in our environment to successfully test the submission page (see the sample below).  In the analysis, you will see that there are scenarios that validate the “happy path” and scenarios that are expected to fail.  I recommend jotting down every scenario you can think of, even if it ends up being 100 different scenarios.  Then, when you are done, take a look back and determine what you do not need.

Maybe you have redundant scenarios, or maybe you have scenarios that can be combined.  In the chart below, take a look at scenario ES-01-02.  I could have positioned it as two different scenarios: one to make sure that an expense with an earlier date can be entered, and one where an expense that is not a whole dollar amount shows the cents when submitted.  I combined them because I expect both to succeed.  If you identify your test data and flow for each page of your custom app up front, you will be able to quickly hit the ground running when it is time to test.

Expense Creation Scenarios for ES-01

| ID # | Scenario | Date | Amount | Expected Result |
| --- | --- | --- | --- | --- |
| ES-01-01 | Add an expense with today’s date, and any $ amount. | [today] | $100.50 | Expense added successfully. |
| ES-01-02 | Add an expense with an earlier date, and any $ amount. | [yesterday] | $50.55 | Expense added successfully. |
| ES-01-03 | Add an expense with a negative amount. | [today] | -$75.00 | Expense not added.  Error message displays. |
| ES-01-04 | Add an expense >= $1,000.00 | [today] | $1,234.56 | Expense added successfully. |
| ES-01-05 | Add an expense with a future date. | [future date] | $100.00 | Expense not added.  Error message displays. |
| ES-01-06 | Add a $0 expense. | [today] | $0.00 | Expense not added.  Error message displays. |
| ES-01-07 | Add an expense with a blank amount. | [today] | (blank) | Expense not added.  Error message displays. |

When you combine the two charts above, you can see the test cases take shape.  We have 7 test cases for Expense Security Scenario ES-01.  Considering we need to test this across 2 different user types, we now have 14.  And there are nine Expense Security Scenarios to go!  Not every one of these will need 7 scenarios each.  If we were to flesh out ES-02, that one is quick for both the sales person and the sales manager: one test case each to make sure that they cannot create new expenses on behalf of others.

For each of the Security Scenarios, we’d repeat the exercise by going through and identifying the necessary tests. Excel is great for keeping track of these.

Record Counts: Online & Offline

Now we need to consider how much data we need to sync down to the device when it is online and when it is offline.  To do this, let’s make another chart and figure out what to expect.  Interviews with key stakeholders will help you answer these questions.  Test with too little data, and you can run into unanticipated performance issues; test with too much data, and you could spend extra time performing unnecessary tests.

For our Expense Scenario above, I would ask the following:

  • How many expenses would a user submit per day?
  • How many users report to a sales manager?
  • How long should past data be available on the device?
  • How much of this data needs to be available offline?
  • Is there a scenario for a system administrator who will have access to all records in the system?
  • If so, how many users are there?

And we may get these answers:

| Scenario | Response |
| --- | --- |
| # Expenses per User per day? | 20 expenses / day |
| How many users report to a Sales Manager? | 10 users |
| How long should data entered in the past be available on the device? | 1 month |
| How much of this data needs to be available offline? | All data |
| Will a System Administrator use this app? | Yes |
| # of Total Users | 100 users |

Looking at these numbers, we can see that a sales manager will have significantly more data than a sales person, and that data needs to be available on the device for 1 month.  The requirements are the same whether the device is online or offline, so:

  • Each sales person will then have a max of 20 expenses per day for 1 month (31 days) = 620 expenses
  • Each sales manager will then have this amount of data for 10 users = 6,200 expenses PLUS their own expenses: 620. Total expenses for a sales manager = 6,820.
  • A System Administrator will then have this amount of data for 100 users = 62,000 expenses

Now we know that when we're creating our test data for the mobile app, we will not only need to consider the various scenarios to test for data entry, but also how much data we need to pre-populate at the start to ensure that the devices can handle the load. We will be modeling real-life scenarios from the start of testing, which will uncover performance and usability issues early on.

Happy Testing!

Topics: Enterprise Mobility