In one of my recent Adobe SiteCatalyst (Analytics) “Top Gun” training classes, a student asked me the following question:
When should you use a variable (i.e. eVar or sProp) vs. using SAINT Classifications?
This is an interesting question that comes up often, so I thought I would share my thoughts on this and my rules of thumb on the topic.
As a refresher, SiteCatalyst variables like eVars and sProps are used to store values that break down Success Events and Traffic Metrics, respectively. For example, if you have a metric for onsite searches, you should be setting a Success Event, and if you want to see that Success Event broken down by onsite search phrase, you might use an eVar to see the number of onsite searches by search phrase. SAINT Classifications allow you to apply metadata to eVars and sProps so you can collect additional data or group data values into buckets. For example, you might use SAINT Classifications to group onsite search phrases into buckets like “Product-related terms” or “SKU # terms.”
However, there are many cases in which you have a choice to capture data in a variable (eVar or sProp) or to use a SAINT Classification. Let’s look at an example to illustrate this. Imagine that you have a website and many of your customers have a Login ID that they use prior to ordering products. You are passing the Login ID value to an eVar so you can see all of your Success Events (i.e. Searches, Orders, Revenue) by Login ID in your SiteCatalyst reports. One day your boss approaches you and says that she wants to see your website KPIs by the City visitors live in, and that City is one of the attributes your back-end folks have related to each Login ID. At this point, you have two choices: one is to have your IT folks pass the City to a new eVar using the Login ID value (if they can’t do this in real-time, you could also pass this to SiteCatalyst via DB VISTA). The other option is to upload the City value for each Login ID as a SAINT Classification of the existing Login ID eVar. Both of these options would meet the objective of your boss, but which one is the right approach?
If I were a betting man, I would guess that most of you mentally chose option #2, which treats City as a SAINT attribute of the Login ID eVar. Does that sound right? Why? It saves you tagging work and helps you avoid working with IT, which usually has delays associated with it. However, would it surprise you to know that I would NOT choose option #2 in this case, and instead would pass the City to a new eVar? Before I tell you why, let me review some of the things I consider when making a decision like this:
Advantages of SAINT Classifications
- Conserves Variables – One of the key advantages of using SAINT Classifications is that they allow you to conserve variables, especially eVars, which tend to run out before any others
- No Tagging Required – SAINT Classifications don’t require additional tagging
- Retroactive – SAINT Classifications are retroactive, so if you mess up when assigning a value, you can always fix it later by simply updating the SAINT data or fixing your rules if using the SAINT Rule Builder. For example, if you incorrectly assign a campaign tracking code to a Campaign Name, you can easily update this after the fact. If you had passed the campaign name to an eVar, there wouldn’t be much you could do to fix historical data. However, the retroactive nature of SAINT Classifications can also be a negative at times (more on this later)
Advantages of Variables
- Data Stored Forever – Once you pass data into a variable (eVar or sProp), it is there forever (for better or worse). This is useful if you want to forever document the value at the time a KPI took place
- sProp Pathing – If you are passing data to an sProp, you can enable Pathing on the variable to see the sequence in which values were collected. Unfortunately, Pathing is not available on SAINT Classifications in Adobe Analytics (though it is in Discover, now known as Ad Hoc Analysis)
- Data Feeds – Many companies use Data Feeds to export Adobe SiteCatalyst data to other data warehouses and Data Feeds only contain data that is organically passed into SiteCatalyst, which excludes SAINT data
As you can see, there is more than meets the eye when it comes to deciding which approach you should use when collecting data. Do you need data in a Data Feed? Do you need Pathing? Do you need to be able to update values after the fact? For each situation, I find the preceding items to be a useful checklist to keep handy.
And Now Back To Our Story…
So now that you have seen my list of considerations, can you see why I suggested using a new eVar for City in our scenario? The item I focused on here was the retroactive nature of SAINT Classifications. If you were to treat City as a SAINT Classification of Login ID, things would probably work out OK initially, but you might have issues in the long run. Let’s say that Adam Greco visits your site, logs in using ID #12345 and then completes an order for $200. At some point you have uploaded a SAINT file that correctly associates Adam’s Login ID with the city of Chicago. At this point, you can use the SAINT Classification “City” report to pivot the data and see an order of $200 for the city of Chicago. However, now let’s imagine that Adam decides to move to San Francisco (something I have done twice in my life!). Your back-end data would at some point learn that Adam has changed cities, and the next time you upload your SAINT file, Adam’s Login ID will be associated with San Francisco. Since SAINT Classifications are retroactive, this will have the impact of changing all activity associated with Adam’s Login ID to look like Adam has always lived in San Francisco, even though all of his KPIs to date were done in Chicago. This means that your “City” report is inaccurate, since it is inflating metrics for San Francisco and deflating metrics for Chicago (and for those who say that the answer is to use Date-Enabled SAINT Classifications, I wish you luck, as I have never seen a company have the time to keep those updated!).
This scenario shows why it is so important to review my list of considerations above. While it is a shame to have to waste an eVar for City, in this case when you can make an association between Login ID and City, using a new variable may be the right thing to do if you want to see what City the Login ID was associated with at the time that the KPI took place and lock that value in forever. In my experience, the retroactive issue is the one that I see companies make the most mistakes with and many don’t even know that they have made a mistake until I point it out to them. Therefore, I will share another rule of thumb I have learned over the years:
Consider whether the data attribute is inherent to the eVar/sProp value or whether it can change. If the metadata is inherent to the value being classified, or it can change without disrupting your data, use SAINT Classifications. Otherwise, use a new variable. When I say “inherent,” I mean that it will most likely not change. For example, if one attribute you have for Login ID is “Gender,” there is a strong likelihood that this can be a SAINT Classification, since it is unlikely that this value will change for each Login ID (outside of a very complicated surgical procedure!). Another example might be birth date, which will never change for each Login ID. However, if you have a loyalty program and treat different Login IDs as Basic, Gold or Silver members, that can easily change over time, so that would be a candidate for a new variable so you are documenting their status at the time that the KPI took place.
As you think about how many attributes you may currently be incorrectly storing via SAINT (it happens to the best of us), you may wonder how you will have enough variables to capture all of these attributes. Keep in mind that just because I am suggesting that you set variables instead of using SAINT for data that is affected by retroactivity, it doesn’t mean that you need to store each of these data points in its own variable. For example, if you decide to capture Member Status, City and Zip Code as variables instead of SAINT Classifications of Login ID, and they are all available on the same page (server call), you can concatenate them into one eVar (i.e. Gold Member|Chicago|60603) and then apply SAINT Classifications to that eVar. In this case, you are still capturing the actual value you need to make sure you are not burned by the retroactive nature of SAINT Classifications, but you can conserve eVars by capturing multiple values in one eVar and splitting out the data using SAINT later. In fact, if you capture the data in a methodical manner, you can even use RegEx in the SAINT Classification Rule Builder to do this automatically.
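A rough sketch of how this concatenation might be tagged follows. The variable number (eVar10), the attribute names, and the bare `s` object standing in for the SiteCatalyst tracking object are all hypothetical:

```javascript
// Hypothetical sketch: concatenate several Login ID attributes into one
// eVar so SAINT Classifications can split them apart later.
// "s" is a plain object standing in for the SiteCatalyst tracking object;
// eVar10 and the attribute values are made-up examples.
var s = {};

function setMemberAttributes(memberStatus, city, zipCode) {
  // Delimit with a character that never appears in the values themselves
  s.eVar10 = [memberStatus, city, zipCode].join("|");
}

setMemberAttributes("Gold Member", "Chicago", "60603");
// s.eVar10 is now "Gold Member|Chicago|60603"
```

A SAINT upload (or Rule Builder RegEx) would then classify the combined value into its three component columns.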
So there you have it. Some things that you should consider when deciding whether you should use a new variable or SAINT Classifications when collecting new data attributes in your Adobe SiteCatalyst (Analytics) implementation. If you would like to learn more tips like this about Adobe SiteCatalyst, consider attending my next Adobe SiteCatalyst “Top Gun” training class at our ACCELERATE conference in Atlanta this September. Thanks!
As I have mentioned in the past, one of the Adobe SiteCatalyst (Analytics) topics I loathe talking about is Product Merchandising. Product Merchandising is complicated and often leaves people scratching their heads in my “Top Gun” training classes. However, many people have mentioned to me that my previous post on Product Merchandising eVars helped them a lot so I am going to continue sharing information on this topic. In this post, I will delve into some more advanced concepts related to Product Merchandising. If you have not read my other Product Merchandising post, I suggest you do that before attempting to digest this one!
When it comes to Conversion Syntax Merchandising eVars, I see many clients make mistakes with allocation. As a refresher, allocation is an Admin Console setting in which you tell SiteCatalyst if the eVar should use the first value it receives or the most recent value it receives, if multiple values are present prior to a success event taking place. For traditional eVars, it is common to use “Most Recent” allocation as a way to ensure that the most recent value passed gets credit for all future success. However, Conversion Syntax Merchandising eVars are a bit different in that this allocation is set at the product level when the Merchandising eVar value is “bound” to the product at the specified binding event(s) dictated in the Admin Console. This means that the Allocation setting is not actually for the current eVar value, but rather, for the eVar value and product combination.
Since that can be confusing, let’s look at an example. Suppose that a visitor comes to your website and conducts an internal search for “books.” You have an internal search phrase Merchandising eVar so you can see which phrases lead to each product being purchased. So in this scenario, the visitor has searched for “books” and adds Product #100 to the cart. Now, if the same visitor searches for “novels” and adds a different product to the cart (say Product #200), it doesn’t really matter if you use “Original Value (First)” allocation or “Most Recent (Last)” allocation for the Conversion Syntax Merchandising eVar, since there are two different products involved and allocation is tied to the binding event of products and eVar values. However, in the unique case in which the same visitor searches for “novels” and finds the same product #100 and decides to add it to the cart a second time, you have to tell SiteCatalyst which eVar value (“books” or “novels”) should be “bound” to Product #100. In this scenario (which admittedly may not happen too often), most clients have indicated that they would like to attribute success to the first search term for product #100 vs. the second search term that led to the same product, since it was the original way they discovered the product. The allocation setting you make (Original or Most Recent) will determine which eVar value gets credit when the same product is involved more than once (product #100 in this example). Therefore, most people decide to use “Original Value (First)” as the allocation method for Conversion Syntax Merchandising eVars.
The next tricky thing about Conversion Syntax Merchandising eVars has to do with non-Order/Revenue success events. As you would expect, since it is their primary purpose, Conversion Syntax Merchandising eVars do a great job of making sure that each product has its own eVar value when it comes time for the purchase event, such that each eVar value is correctly associated with the right product. However, there are cases in which you will want to use eVars for more than just the purchase event (Orders, Revenue, Units). For example, if you think back to the preceding example of internal search, besides storing the internal search phrases to associate with products upon purchase, you may also want to see something more basic, like how many internal searches took place for each search phrase. In that case, you would set a success event each time an internal search takes place, and you would already be setting the Conversion Syntax Merchandising eVar with the search phrase (i.e. “books”). Naturally, you would expect that if you add the internal searches success event to the internal search phrase Merchandising eVar report, you would see the number of searches taking place by phrase. Unfortunately, you would be wrong. What you may not know is that Conversion Syntax Merchandising eVars only associate values with success events when the Products Variable is set or when binding has already occurred. Of course, you can set the Merchandising eVar anytime you want, and it will store a value, but it will not associate that value with success events unless a product value is passed to the Products Variable. I believe the reasoning here was that Merchandising was meant for products, so the two go hand-in-hand.
This is best illustrated via an example. Let’s continue with our internal search example, only this time, in addition to seeing how many times each internal search phrase leads to orders & revenue, you want to have a custom internal searches success event and be able to break it down by internal search phrase. The way most companies attempt to accomplish this is by using a success event and eVar code like this:
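A minimal sketch of that typical approach, using the event 10 and eVar5 numbers from this example (the bare `s` object is a stand-in for the SiteCatalyst tracking object):

```javascript
// Sketch of the common (problematic) approach: fire the Internal Searches
// success event and set the Conversion Syntax Merchandising eVar, but leave
// the Products Variable empty.
var s = {};
s.events = "event10"; // Internal Searches success event
s.eVar5 = "books";    // Merchandising eVar holding the search phrase
// Note: s.products is never set, so the eVar value will not be
// associated with event10
```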
However, doing this will yield some undesirable results. Here is what a report of this eVar might look like in SiteCatalyst:
You will notice an abnormally high “None” percent in this report, which represents cases in which there was no association between the eVar value and the Internal Searches success event. Since it should be impossible to have an internal search event with no internal search phrase, you would expect to have no values in the “None” row for the internal searches success event (since most companies will still populate a value of [blank search] or something similar if users search with no phrase). The “None” value for Orders is fine, since that represents cases in which no search phrase was used prior to the order.
To rectify this, you would add the “fake” product to your code so it looks like this:
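A sketch of the adjusted tagging with the “fake” product added (same hypothetical `s` stand-in object):

```javascript
// Sketch of the corrected approach: set a "fake" product so SiteCatalyst
// associates the Merchandising eVar value with event 10.
var s = {};
s.events = "event10";       // Internal Searches success event
s.eVar5 = "books";          // search phrase, ready to bind to a real product later
s.products = ";intsearch1"; // "fake" product (empty category; no quantity/price needed)
```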
Setting this “fake” product allows SiteCatalyst to set the Conversion Syntax Merchandising eVar value at the same time that event 10 (Internal Searches) is fired, so you can see one internal search for “books,” while still keeping the Merchandising eVar5 value ready to bind to a “real” product at the time of your selected binding events (normally Cart Addition and Product View). Using this code results in a more accurate report when viewed with the custom success event, which in this case is the internal searches success event:
You may also notice that the “fake” product used is a value and then a number. You can make the “fake” product any value you’d like, but most people tend to label it in a way that indicates what event was taking place. In this case, I named it “intsearch1” since the “fake” product had to do with internal search. If the “fake” product had been done as a result of an internal campaign eVar, I might have named it “intcampaign1” instead. However, it is important to note that you need to increment the “fake” product value (i.e. intsearch2, intsearch3, etc…) so that the same value is not used more than once by the same visitor. Using the same “fake” product value for all cases (every search term in this example) would negate the power of Merchandising, which is designed to attribute different values to different products. The only exception to this is a scenario in which the visitor intentionally uses the same value (i.e. searches on the same search keyword in this scenario), and in that case you would want to re-use the same “fake” product value whether the duplicate value happened sequentially or after another “fake” value has been passed. It is also important to remember to add the success event that you want to use this eVar with to the list of “Binding Events” in the Administration Console. In this case, you would add the Internal Search success event to the previous list of Binding Events (i.e. Cart Addition and Product View).
Note that this “fake” product workaround only has to be used when all of the following conditions are true:
- You are using a Conversion Syntax Merchandising eVar
- You want to see that Merchandising eVar’s value associated with a success event other than Orders, Revenue or Units
- You are not setting the Products variable with a value at the time the success event is being set (this is why none of this applies to Product Syntax Merchandising eVars)
This means that you only really need to worry about this in cases where you want the Conversion Syntax eVar to do double-duty. I have found that the following situations are the main times I need this work-around:
- Internal Search Phrase eVar and Internal Searches success event
- Navigation Element Clicked eVar and Navigation Link Clicks success event
- Internal Campaign eVar and Internal Campaign Clicks success event
- Product Filter Element eVar and Product Filter Clicks success event
As I mentioned at the outset, Product Merchandising is a bit tricky and the detailed items here around Conversion Syntax can be even trickier. I have learned that there are some things that you just have to memorize when it comes to Adobe SiteCatalyst and this post covers a few of them.
P.S. If you want to learn more about this and other SiteCatalyst tips and tricks, please join me for my Adobe SiteCatalyst “Top Gun” class in Atlanta this September as part of our ACCELERATE conference.
Lately, Adobe has been sneaking some cool new features into the SiteCatalyst product and doing it without much fanfare. While I am sure these are buried somewhere in release notes, I thought I’d call out two of them that I really like, so you know that they are there.
Search Within Add Metrics Dialog Window
You can now use a search filter within the Add Metrics window to easily find the metrics you want to add to a conversion or traffic report. Simply click into the search area and begin typing:
Weekdays & Weekends in Metric Reports
A few years ago, Adobe added the ability to filter metric reports by Mondays, Tuesdays, etc. This allowed you to look at the same day (i.e. Monday) over the last few months to see how a metric changed on each subsequent day of the week. However, one gap that remained was the ability to filter by weekdays or weekends. I am pleased to report that Adobe has now added these as valid filters in metric reports as shown here:
I am guessing that there are a few more unannounced new features, so if you spot one, please leave a comment here so we can all enjoy! Thanks!
One of my newest clients is in a highly competitive business in which they sell similar products as other retailers. These days, many online retailers have a hunch that they are being “Amazon-ed,” which they define as visitors finding products on their website and then going to see if they can get them cheaper/faster on Amazon.com. This client was attempting to use time spent on page as a way to tell if/when visitors were leaving their site to go price shopping. Unfortunately, I am not a huge fan of time spent on page, since a page can show widely varying times spent for many reasons other than price shopping (i.e. working, going to the bathroom, yelling at kids (in my case), etc.). Because of this, I wanted to come up with an alternative way to see if price was a potential reason for lost business. However, before I share my idea, I want to add a disclaimer that there is no [legal] way to really know if people are leaving your site to buy something elsewhere due to price, but the technique I will show may shed some light on how pricing impacts your conversion rates.
Competitor Pricing – Step 1
The first part of my competitive pricing solution requires that for some or all of your products (SKUs), you have detailed competitor pricing. Many of my clients have teams that are constantly monitoring competitive websites and documenting the current prices for some or all of their products. If your organization doesn’t have this, my solution will not work (so you can stop reading now!). If you do have this information, you will need to create a spreadsheet that has your product IDs (values passed to the Products Variable) and your competitors’ price in the next column. If you have multiple competitors, you can add a new column for each one:
Next, you will have to talk with your Adobe Account Manager to create a new DB Vista Rule. As a refresher, a DB Vista Rule allows you to populate SiteCatalyst variables with values from a database lookup table stored on Adobe’s secure servers. This will allow you to pass in the competitor price for each product viewed and added to cart on your website via a server-side lookup. The Adobe Engineering Services team can walk you through how to upload the competitor prices to DB Vista and how to update it over time. Keep in mind that you will need to have a process in place that updates competitors’ prices as they change, preferably within the hour so your data is accurate. This is often done by FTP’ing changes on an hourly basis. Creating a DB Vista Rule will cost you a one-time fee of a few thousand dollars, but you can maintain it yourself thereafter. If you want to save some money, you can ask your internal developers if they can ping a similar competitor cost table in real-time as visitors are on your site, but in my experience, the work effort around that is much more than the cost of the DB Vista Rule.
Competitor Pricing – Step 2
Once you have a way to send competitor prices (by Product ID) into SiteCatalyst, where should it go? What I propose is that you pass the Product ID, your price and your competitors’ price, concatenated into a string, to a new Conversion Variable (eVar). Since your visitors may view multiple products, you will also want to make this a Merchandising eVar using Product Syntax. I recommend that the data be passed when visitors view the product detail page or add a product to the shopping cart. For example, if a visitor views SKU # 10010100 and your price is $30.00 and your competitors’ price is $29.50, you would pass this:
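A sketch of the Products Variable string this would produce (eVar20 is a hypothetical variable number; in practice the competitor price portion would be appended server-side by the DB Vista Rule rather than in page code):

```javascript
// Hypothetical sketch: Product Syntax Merchandising eVar carrying
// "Product ID|our price|competitor price". The bare "s" object stands in
// for the SiteCatalyst tracking object.
var s = {};
var sku = "10010100";
var ourPrice = "30.00";
var competitorPrice = "29.50"; // supplied by the DB Vista lookup

// Product Syntax fields: category;product;quantity;price;events;merch-eVars
s.products = ";" + sku + ";;;;eVar20=" +
  [sku, ourPrice, competitorPrice].join("|");
// s.products is now ";10010100;;;;eVar20=10010100|30.00|29.50"
```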
In this case, the product ID is available on the page, as is your current price. The only data point you don’t have is your competitors’ price, which can be added to the string via the DB Vista Rule. This allows you to capture all of the key elements needed to do analysis. For example, if you add the Product Views success event to this new eVar report and filter for the above product ID, you will see all of the different pricing permutations between you and your competitor for the selected date range:
Next, you can add Cart Additions or Orders to the report to see how often each product converted with the given pricing spread:
In this fictitious example, you can see that Orders per Product View was up significantly when pricing was the same or better than the competitor for the product in question.
But there is even more information you can glean when we apply SAINT Classifications. For example, you can classify the product with just the pricing range difference to boil this data down to a finite number of rows in a way that is a tad easier to interpret:
Taking this concept one step further, you can apply another SAINT Classification that takes the Product ID out of the equation to see how the pricing spread impacts all products:
For those that really need things spelled out for them, you can use SAINT to create the highest level view of your pricing by boiling the data down to cases where you were higher, lower or the same with respect to pricing:
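The bucketing behind that highest-level view could be sketched like this (the label text is made up for illustration; in practice the logic would live in the SAINT Rule Builder or in the script that builds your SAINT file):

```javascript
// Hypothetical sketch: collapse each "our price vs. competitor price" pair
// into a coarse Higher/Same/Lower classification label.
function priceComparison(ourPrice, competitorPrice) {
  if (ourPrice > competitorPrice) return "Priced Higher Than Competitor";
  if (ourPrice < competitorPrice) return "Priced Lower Than Competitor";
  return "Priced Same As Competitor";
}

priceComparison(30.00, 29.50); // "Priced Higher Than Competitor"
```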
Obviously, the last few reports can still be viewed by Product by simply using the Products variable breakdown, but I think they show a good high-level view of pricing impact. Keep in mind that each of these rows can be trended over time in SiteCatalyst or ReportBuilder to see a long-term effect.
For those of you who like to kick things up a notch, you can also use the same DB Vista Rule to incorporate your product margin into the new eVar. If you upload your product costs to the DB Vista table, you can have the rule calculate the difference between your price and your cost and add the result as another parameter to the eVar. Then, via SAINT Classifications, you can split this out and see cases where your price is higher than your competitor broken down by your margin:
In this case, the product in question has a cost of $26.00, so the difference is passed as the last parameter to the eVar so we can include it in our analysis. This allows us to create a new SAINT Classification where we can see Orders/Product View (or Cart Addition) for all products by the product margin amount:
Since all SAINT Classifications can be broken down by each other, this also allows us to see our conversion rates by price difference broken down by product margin amount:
Keep in mind that all SAINT Classifications are eligible for use in Segmentation, which means that you can now build a segment using pricing differential to competitors and product margin as criteria when doing web analysis! Also, if you want to learn how to add product costs as a new metric with which you can calculate product margin as a KPI, check out my old blog post from 2008 on how to do that.
As I stated early on, there is no way to make a direct connection between people looking at your site and then price shopping on another site, but my theory is that if you consistently under-perform when you are priced higher than your known competitor(s), this approach may give you some data to validate your theories. Obviously, there are other factors such as shipping, taxes, etc. that can have a major factor, but some of those can be included in this solution as well by simply adding additional parameters to the eVar shown above. Other ways to do similar competitive analysis include using Voice of Customer surveys to ask your visitors if they are price shopping, or moving all SiteCatalyst and competitive data into Adobe’s Data Workbench product. Either way, if you like the concept, you can give it a try or contact me if you want some assistance. If you have other ways to do this, feel free to leave a comment here. Thanks!
In working with a client recently, an interesting question arose around cart additions. This client wanted to know the order in which visitors were adding products to the shopping cart. Which products tended to be added first, second, third, etc.? They also wanted to know which products were added after a specific product was added to the cart (i.e. if a visitor adds product A, what is the next product they tend to add?). Finally, they wondered which cart add product combinations most often lead to orders.
I had to admit that I was surprised that no one had asked me these questions in the past (a rarity for an old-timer like me!). However, I love getting new questions since it allows me to come up with cool ways to answer them. Therefore, in this post, I will share some of the ideas that I am proposing to this client in case your organization has similar questions.
Product Cart Order Sequence
To tackle the question of which products are added to the cart first, second, third, my first instinct was to try out the cool new sequential segmentation in Adobe Reports & Analytics (SiteCatalyst). This feature has been around in Ad Hoc Analysis (Discover) for a while, but is new to Adobe Reports and Analytics. However, the more I thought about this, the more I realized that sequential segmentation wouldn’t help very much. The only scenario in which I think it might help, is if you want to know exactly how often Product A was followed by Product B and then Product C and an order took place thereafter. If you know the sequence you are looking for, you can isolate it and look at any report (i.e. Visits, Orders) using sequential segmentation.
But my client is looking to do more exploration and find out which products are added first, second, third, etc. Therefore, my thoughts turned to my old friend Pathing. Pathing is a great way to see a sequence of anything happening on a website/app. In this case, the sequence I am looking to see is products added to cart. Therefore, a cool way to answer this question would be to create a new Traffic Variable (sProp) and pass the Product IDs (or Names) of each product added to the shopping cart to the variable when a Cart Addition takes place. Once this is done, you can enable Pathing on this new “Products Added to Cart” sProp so you can see all of the available pathing reports. For example, you can open the Full Paths report to see the most popular product combinations added to the shopping cart. Obviously, the first batch of entries in this report will be cases with just one product added:
However, when you get deeper into the results, you will start to see multi-product combinations:
Of course, you can narrow these paths to a specific product in this report using the “Showing Paths containing” feature:
Or you could also use the next page flow report to see products added after a specific product (in this case an Exit means that no other products were added to the cart in the same visit):
Or you could see similar information using Pathfinder:
As you can see, by simply passing product IDs (or names) to a new sProp, you can gain insight into which products are added the most and in which combinations.
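A minimal sketch of the tagging for this approach (prop15 is a hypothetical variable number for the “Products Added to Cart” sProp; the bare `s` object stands in for the tracking object):

```javascript
// Hypothetical sketch: on every cart addition, pass the product ID to a
// pathing-enabled sProp alongside the standard scAdd event.
var s = {};

function trackCartAdd(productId) {
  s.products = ";" + productId;
  s.events = "scAdd";
  s.prop15 = productId; // "Products Added to Cart" sProp with Pathing enabled
}

trackCartAdd("Product A");
```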
If you have a Product Category SAINT Classification for your Products variable, you can also see all of the above reports by Product Category in Discover (Ad Hoc Analysis) by using pathing on classifications. Or you could always pass the Product Category to another sProp if it is known at the time, as suggested in the comments by Jan Exner.
But What About Orders?
While the preceding concept may be interesting, it falls short of the original goal because it doesn’t show which of these cart addition sequences leads to orders. While you could segment on visits with an order and then look at the remaining paths, I prefer to visualize the actual paths and see exactly when the order took place. Therefore, to add this component, I suggest that you pass the phrase “order” to the same new traffic variable on the order confirmation page. By including this one new value, it will be included in the pathing reports and can be used in any of the reports above or the fall-out report. You can also use the previous page flow report beginning with the “order” value to see the most common cart addition product sequences (paths) that lead to success:
This is probably best done in Ad Hoc Analysis (Discover) where you can have unlimited branches in the report, but you can still extract value from this in Adobe Reports & Analytics.
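On the order confirmation page, the tagging sketch might look like this (again, prop15 is a hypothetical variable number):

```javascript
// Hypothetical sketch: inject the literal value "order" into the same
// pathing sProp on the order confirmation page so it appears in path
// reports alongside the cart-add product sequence.
var s = {};
s.events = "purchase";
s.prop15 = "order"; // shows where the order fell within the cart-add paths
```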
Other Pathing Reports
While I haven’t had much time to play with this concept, I would imagine that you could also extract some useful information from the additional pathing reports that are enabled when you turn on pathing for this new “Products Added to Cart” sProp. For example, if you want the “411” on a particular product being added to the cart, you can open the Summary report:
You could also see how often each product was the only product added to the cart or abandoned in the cart by using an Exit Rate formula (Exits/Visits). Keep in mind that if a visitor adds another product to the cart, the product in question will no longer be an “exit” as far as this report is concerned, so the exit rate below is the combination of single carts + abandons per visit:
You may even be able to use the “Page Depth” (even though they really aren’t pages!) to see how often a particular product was the first one added to cart, second, etc… I say may, because this is what I think this report is showing, but I need Ben Gaines to verify this for me!
Lastly, if you care about Cart Removals (which is not something I normally care about, since many people simply exit instead of removing products), you could also include them in this approach. To do this, you’d change the values you pass to the sProp to “Add:[Product ID or Name]” and “Remove:[Product ID or Name]” instead of just passing in the product ID or name.
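If you go this route, the earlier tagging sketch would change only in how the sProp value is built. As before, the `s` stub and the `prop10` slot are assumptions for illustration.

```javascript
// Sketch: prefix each pathing value so cart adds and removes are
// distinguishable in the same report. prop10 is an assumed slot.
var s = { events: "", prop10: "", t: function () {} };

function trackCartEvent(productId, isRemoval) {
  s.events = isRemoval ? "scRemove" : "scAdd";
  s.prop10 = (isRemoval ? "Remove:" : "Add:") + productId;
  s.t();
}

trackCartEvent("SKU-12345", false); // sends "Add:SKU-12345"
trackCartEvent("SKU-12345", true);  // sends "Remove:SKU-12345"
```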
As those of you who have read my posts in the past know, sometimes, I come up with crazy ideas like this and they work out, but other times they don’t. If you think this concept is interesting, feel free to give it a try, but keep in mind that this is just a concept for now until I get some clients to do more experimentation…Enjoy!
Happy summer (almost)! After getting many requests over the years, I am finally bringing my one-day intensive Adobe SiteCatalyst (Analytics) “Top Gun” Training class to New York City. I will be hosting the class on Friday, July 25th. This has been made possible by our friends at eClerx who have graciously provided a conference room at their New York office. For those unfamiliar with my class, the goal is to teach the inner workings of SiteCatalyst in a way that helps you know how it can be applied to daily business questions. While the class doesn’t get into code, it is pretty deep into the features and functions of the SiteCatalyst product. It does not cover daily use of the interface (since that is pretty easy), but rather, is geared towards those who “own” SiteCatalyst within the enterprise or developers who want to understand the differences between an eVar and an sProp. The morning of the class closely mirrors the first section of my SiteCatalyst Handbook and the afternoon session resembles the third section of my book.
Unfortunately, the room can only hold 16 people, so if you are interested, I suggest you sign up sooner rather than later, as there is a good chance the seats will be gone relatively quickly based upon past cities in which I have conducted the same class. To learn more about the class, see pricing information and to register, please use this link: http://bit.ly/top-gun-nyc.
If New York isn’t in the cards for you, don’t forget that two months after this class, I will be conducting the same class in Atlanta as part of our ACCELERATE conference.
In the recent white paper I wrote in partnership with Adobe, I discuss ways to re-energize your web analytics implementation. Often times, this involves re-assessing your business requirements and rolling out a more updated web analytics implementation. However, if you decide to make changes to your implementation in a tool like Adobe Analytics (SiteCatalyst), at some point you will have to make a decision as to whether you should pass new data into the existing report suite or begin fresh with a new report suite. This can be a tough decision, and I thought I would use this blog post to share some things to consider to help you make the best choice for your organization.
Advantages of Using The Existing Report Suite
To begin, let’s look at the benefits of using the same report suite when you re-implement. The main one that comes to mind is the ability to see historical trends of your data. In web analytics, this is important, since seeing a trend of Visits or Orders gives you a better context from which to analyze your data. In SiteCatalyst, you get the added benefit of seeing monthly and yearly trend lines in reports to show you month over month and year over year activity. Obviously, if you decide to start fresh with a new report suite, your users will only see data from the date you re-implement in the SiteCatalyst interface.
Another benefit of continuing with your existing report suite is that you will retain unique visitors for those who have visited your site in the past and have not deleted their cookies. When you begin with a new report suite, all visitors will be new unique visitors, so you will be starting your unique visitor counts over from the day you re-implement. Starting with a new report suite will also result in some recency reports (i.e. Visit Number, Returning Visitors and Customer Loyalty) being negatively impacted. Additionally, using an existing report suite allows you to retain any values currently persisting in Conversion Variables (eVars). Oftentimes you have eVar values that are meant to persist until a KPI takes place or until a specific timeframe occurs. If you create a new report suite, all eVars will start over since they are tied to the SiteCatalyst cookie ID.
Another area to consider is Segmentation. It is common to use a Visitor container within a SiteCatalyst segment to look for visitors who have performed an action at some point in the past. This segment will rely on the cookie ID so if you begin with a new report suite, you will lose visitors in your desired segment. For example, let’s say you have a segment that looks for visitors who have come from an e-mail at some point in the past and ordered in today’s visit. If you create a new report suite, you will lose all data from people who may have come from an e-mail prior to the new report suite being created.
If your end-users have dashboards, bookmarks and alerts set up, using the existing report suite will avoid the need to re-create them in the new report suite for variables that remain unchanged. Depending upon how active your users are, this can have a significant impact, as re-creating these can result in a lot of re-work.
There are many other items to consider, but these are the ones that I have seen come up most often as advantages of keeping the existing report suite when re-implementing.
Advantages of Using A New Report Suite
So now that I have scared you off of using a new report suite when re-implementing, let me take the counter-argument. Despite all of the advantages listed above, there are many cases in which I recommend starting with a brand new report suite. The most obvious is when the current implementation has proven to be grossly incorrect or misaligned. I often encounter situations in which the current implementation hasn’t been updated for years and is not at all related to what is currently on the website (or mobile app). If what you have doesn’t answer the relevant business questions, all of the advantages listed above become obsolete. In this situation, seeing historical trends of irrelevant data points, losing eVar values or losing report bookmarks isn’t a big deal. You may still lose out on your historical unique visitor counts since that is out-of-the-box functionality, but I don’t think this justifies not starting with a clean slate. If you are not sure whether your current implementation is aligned with your latest business goals, I highly recommend that you perform an implementation audit. This will help you understand how good or bad your implementation is, which is a key component of making the new vs. existing report suite decision.
The next situation is one in which the current implementation is using many of the allotted SiteCatalyst variables, but the new implementation has so much data to collect that it has to re-use the same variables going forward. This gets messy, since it is easy to re-name existing variables, but you cannot remove historical data from them. Therefore, if you convert event 1 from “Internal Searches” to “Leads” because you no longer have a search function and are out of success events, you can get into trouble when your end-users view a trend of Leads for this month and see that they are a fraction of what they were last year! Your users may not understand that the data they are seeing from last year is “Internal Searches” and not “Leads,” and may sound off alarms indicating that the website is broken and conversion has fallen off a cliff! While you can do your best to annotate SiteCatalyst reports and educate people, the re-use of existing variables is always a risk, whereas using a new report suite does not require the re-use of existing variables and can avoid this confusion. Where possible, I suggest that you use previously unused variables for your new implementation so this historical data issue doesn’t affect you. Obviously, this requires that your existing implementation isn’t using most or all of your available SiteCatalyst variables. Hence, one key factor when deciding whether to use an existing report suite or create a new one is counting how many additional variable slots you will need and determining whether you have enough to avoid re-using old variables for new data. If you have enough, that may tip the scale toward re-use, but if you don’t, it may make you lean towards a new report suite.
When it comes to historical trends, one thing to keep in mind is that even if you choose to create a new report suite, it is still possible to see historical trends for data that the new and old report suites have in common. This can be done by importing data into the new suite using Data Sources. This is most effective when the data you are uploading are success events (numbers) and a bit more difficult for eVar and sProp data. The main benefit of this approach is that it allows your SiteCatalyst users to see the data from within the SiteCatalyst interface. Another option is to use Adobe ReportBuilder. Within Excel, you can build a data block for the data in the old report suite and then another data block for the same data in the new report suite and then merge the two together in a graph using two data ranges. Doing this allows you to create charts and graphs that span the old and the new, but these are only available in Excel and not in the SiteCatalyst interface.
Another justification for starting with a new report suite is that your current suite has data that is untrustworthy. I often talk to companies who say that they simply do not trust that the data in SiteCatalyst is correct. As I mention in the white paper, trust is an easy thing to lose and a hard thing to earn back. Your SiteCatalyst reports can be correct nine times out of ten, but people will focus on the one time it was wrong. When this happens too often, it may be time to start with a new report suite and make sure that anything added to this new suite is validated and trusted. This can help you create a new perception and help you re-build the trust that is so essential to web analytics.
As you can see, there are many things to consider when it comes to re-implementation and report suites. The current state of your implementation and its data will be the biggest decision points, but every situation is different. Hopefully this helps provide a framework for making the decision and allows you to weigh the pros and cons of each approach.
Those of you who have read my blog posts (and book) over the years know that I have lots of opinions when it comes to web analytics, web analytics implementations and especially those using Adobe Analytics. Whenever possible, I try to impart lessons I have learned during my web analytics career so you can improve things at your organization. However, much of what I have written in the past has been product-related, covering features, functions and implementation tips. Obviously, there is much more than that involved when it comes to success in web analytics.
As some of you may know, the last role I held when I worked at Omniture (prior to Adobe acquisition) was one in which I was tasked with “saving” accounts that had gone astray. I encountered many accounts that had either a dysfunctional web analytics program or implementation. One way or another, they were not getting the desired value from their investment in SiteCatalyst. In my time serving this role, I came to see many common characteristics of those who were having problems and identified specific ways to address them to get clients back on track. After I left Omniture, I joined Salesforce.com as the head of web analytics. In that role, I encountered similar issues, as the Salesforce.com implementation and program had many of the same problems I had seen while at Omniture. Over the next few years, I had the opportunity to test out my “client-saving” techniques in a real life setting and had some great success in turning around the web analytics program at Salesforce.com.
While at Web Analytics Demystified for the past three years, I have continued my mission to help ailing web analytics programs and had the good fortune to work with some great clients. These clients have entrusted me to show them how to bring their web analytics programs back from the abyss or to improve good things they are already doing. Working with the great partners at Web Analytics Demystified, I have been able to learn and improve upon things I have done in the past. Last year at the Chicago eMetrics conference, I documented my lessons learned into a forty-five minute presentation entitled “Bringing your Web Analytics Program Back from the Dead!” I was a bit worried that no one would actually show up to my session, since coming was an implicit admission that things weren’t going so well. But to my surprise, there was standing room only! Jim Sterne informed me that I had about 95% of all attendees in my breakout session! I was excited to share my experiences and afterwards, received a great response from the crowd, as well as a rush of people attacking me at the stage with follow-up questions. Apparently, I had hit some sort of nerve with the topic (Note: This summer I will be presenting a follow-up session at Chicago eMetrics on the topic)!
Since then, I have wondered how I could share this information with more folks who may be interested in improving or re-energizing their web analytics programs and/or implementations. I considered writing a book on the topic, but having recently written a book, I knew that this was a massive undertaking and that my busy schedule wouldn’t allow it. Instead, I decided to partner with my old friends at Adobe to create a new white paper on the topic. In this white paper, I have tried to get down to the core tenets of my approach to re-energizing web analytics programs and synthesized it into under twenty pages of content. While most of the concepts in the paper were learned working with Adobe clients, I believe that the principles will apply to any web analytics technology or program. In fact, I believe that the white paper would also apply to non-web analytics programs, as much of it goes back years to my time working at Arthur Andersen in the nineties.
Therefore, without any more preamble, I am pleased to announce the immediate availability of this new Adobe-sponsored white paper entitled “Reenergizing Your Web Analytics Program.” I hope that you will take the time to read it and take advantage of some of the lessons and techniques I have learned over the past 10+ years so that you and your organization can improve your program/implementation. Since ours is a young industry, I think it is the responsibility of us “old-timers” to pass on what we have learned so others don’t have to “reinvent the wheel.”
Click here to download white paper
A big thanks goes out to my friends at Adobe for sponsoring this white paper and making it happen. Enjoy!
I recently had a client pose an interesting question related to their shopping cart. They wanted to know the distribution of money their visitors were bringing with them to each step of the shopping cart funnel. For example, what percent of visitors have between $25 and $50 in their cart when they reach the “Billing” step of the conversion funnel? Does this percentage remain constant throughout the funnel or are there significant drop-offs? Unfortunately, this is not something that can be easily derived in SiteCatalyst, but with a bit of creativity, I will show you how you can add this data to your implementation.
Calculating Current Order Value
The first step in this process is to work with your developers to create a new Counter eVar that will hold the current order value. As soon as a visitor adds an item to the cart, pass the dollar amount associated with that cart addition to the Counter eVar (in addition to passing it to a currency event as prescribed in my “Money Left On Table” blog post). This value will be bound to the Cart Addition success event and future cart events unless it is modified. If the visitor adds more products to the cart, pass in those amounts, and if the visitor removes an item from the cart, subtract it from the Counter eVar value (remember, you pass values to Counter eVars using the “+” or “-” sign). I would expire the Counter eVar upon Purchase or at the end of the Visit (if your site doesn’t have a persistent cart).
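The signed-delta convention could be tagged roughly as follows. The `s` stub and the `eVar5` slot are assumptions for illustration; the point is that the value sent is a signed amount and SiteCatalyst maintains the running total.

```javascript
// Sketch of the Counter eVar convention: send a signed delta
// ("+24.99" to add, "-24.99" to remove) and let SiteCatalyst keep the
// running total. The "s" stub and eVar5 slot are assumptions.
var s = { events: "", eVar5: "", t: function () {} };

function trackCartChange(amount, isRemoval) {
  s.events = isRemoval ? "scRemove" : "scAdd";
  // The leading sign tells SiteCatalyst to increment or decrement the total.
  s.eVar5 = (isRemoval ? "-" : "+") + amount.toFixed(2);
  s.t();
}

trackCartChange(24.99, false); // sends "+24.99"
```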
By having these values in the Counter eVar, you will end up with many different dollar amounts when you open the eVar report with one of your cart events. Here is an example of what the eVar report might look like:
Obviously, this report is not that readable, so the next step is to classify it into meaningful groupings, such as Under $20, $21-$35, $36-$50, etc… This will allow you to analyze the data in buckets and look for insights. Which groupings you choose is up to you, and you can use SAINT to have multiple groupings, such as every five dollars, every ten dollars, etc… Here is what it might look like after the SAINT Classification:
This general concept is similar to one that I described in my Revenue Bands post, but in that scenario, we were just passing the final order amount to a regular text eVar. The difference here is that we are using the Counter eVar to adjust the order value up or down as it progresses through the cart process.
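The bucketing logic behind such a SAINT classification file could be sketched as a simple lookup function; you would run each raw eVar value through it to populate the classification column. The band edges here are illustrative, not prescribed.

```javascript
// Sketch: map a raw dollar value to a classification band label.
// Band edges are hypothetical; choose whatever groupings suit you.
function dollarBand(value) {
  if (value < 20) return "Under $20";
  if (value <= 35) return "$21-$35";
  if (value <= 50) return "$36-$50";
  return "Over $50";
}

dollarBand(27.50); // "$21-$35"
```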
Once we have the current order values tied to each stage of the cart funnel and have grouped them accordingly using SAINT, our next challenge is to compare the distributions. There are a few different comparisons you can make with this data, so I will touch upon each of them. The first one you might want to see is whether the various percent distributions are steady or going up/down over time. In this case, you may not care about the actual raw numbers that are associated with each order value range, but rather, are most likely more interested in the percent of the total. For example, it may not be that interesting that 2,500 checkouts fell into the range of $15-$25, but it may be interesting to know that this dollar range represented 15% of all visits to the checkout step of the funnel. If you could see this percentage, then you could trend it over time and see if that $15-$25 bucket is increasing, decreasing or steady over time.
To see these percentages, you have two options. The first is to download the data to Excel and create formulas to calculate the percent and trend it over time. The second, if you want to stay in the SiteCatalyst interface, is to employ the “Total Metrics” feature. This feature allows you to create a calculated metric that divides the row value by the total at the bottom of the report. For example, if you wanted to calculate the percent of each dollar band while at the Checkout step, you would divide Checkouts by Total Checkouts using a formula like the one shown here:
This formula moves the percent shown in the regular eVar report front and center so it is the actual metric of the report. To visualize this better, let’s look at the previously shown report with this new metric column added:
As you can see, the percentages that were previously on the right side of the column (more as an FYI), are now present by themselves as a real metric in SiteCatalyst. Now you can use this percentage as a true metric, meaning that you can trend it over time and see its historical performance:
This allows you to see how each dollar amount band does and do some hard-core web analysis!
Another analysis you may want to do with this data is to see the drop-off between the dollar-amount percentages added to cart, the percentages making it to checkout, etc… This is a bit more complex because you are looking at one dollar-amount grouping, but seeing how it changes as visitors get further into the cart process. Unfortunately, there is no great SiteCatalyst report for comparing different percentages over time, so this analysis will have to be done in Excel.
To begin, you will want to create additional “Total” metrics like the one shown above for the other cart steps that you care about. In SiteCatalyst, this is what a report might look like, though it is limited in its use. In this case, the client has a customization step in the funnel, a billing page step and then a checkout step. Using the “Total” metrics, you can compare the changes in dollar amounts at the various steps of the funnel:
In this case, we are looking to see how consistent the percentages are across each row and seeing if we can identify any problem areas. However, to do analysis on this, Excel might be a better tool since it is easier to compare the percentages between different columns. Also keep in mind that you can break this report down by Product or Product Category to see how these percentages change by Product.
If your website has discrete steps in its funnel and if you are curious to see how much money visitors have at each step of the cart, the preceding is one way to do this. In addition to what I have shown here, having this information can be useful in other ways. For example, if you want to build a segment of all cases in which a visitor had more than $100 at the checkout step, but did not purchase, the eVar described here can be used as part of your segment criteria. I am sure there are many other ways to use this data as well, but hopefully this gives you some food for thought.
If your web analytics work covers websites or apps that span different countries, there are some important aspects of Adobe SiteCatalyst (Analytics) that you must know. In this post, I will share some of the things I have learned over the years related to currencies and exchange rates in SiteCatalyst.
When you work for a multi-national organization, the first decision you have to make is whether you plan to have a different report suite for each country website or whether you will combine all data into one report suite and use segmentation for day-to-day analysis. For the pros and cons of this decision, I suggest you refer to this old post that covers multi-suite tagging vs. segmentation. As noted in that post, one of the downsides of using one report suite and segmentation is that you cannot have a different currency for each country. I find this very limiting, so let’s assume that you have a different report suite for each country site in your organization. When implementing each report suite, you will assign a currency that the report suite will use. For example, if the report suite is for Japan, in the Administration Console, you will make the currency Japanese Yen:
Once you do this, you just need to make sure that when you pass Revenue and currency success events that you set the s.currencyCode variable to the appropriate currency code for that country (i.e. JPY). This will tell SiteCatalyst that the numbers you are passing should be stored as Japanese Yen. If you are using multi-suite tagging and sending a second copy of data to a global report suite, then Revenue and currency success events will be translated into the currency of the global report suite (i.e. US Dollars) using the currency exchange rates found on xe.com. This allows your users in one country to see data in their own local currency, while letting executives see data rolled-up in a master suite in one unified currency.
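For the Japan example, the tagging might look like the sketch below. The `s` stub, the suite IDs and the product values are all hypothetical; the substance is that `s.currencyCode` declares the currency of the amounts being passed.

```javascript
// Assumed tagging for the Japan report suite: declare the currency so
// SiteCatalyst stores the amounts as Yen (and converts them for the
// global suite). Suite IDs and product values are hypothetical.
var s = {
  account: "mycojp,mycoglobal",  // multi-suite tagging: local + global
  currencyCode: "",
  events: "",
  products: "",
  t: function () {}
};

s.currencyCode = "JPY";          // amounts below are Japanese Yen
s.events = "purchase";
s.products = ";SKU-123;1;5000";  // a 5,000 JPY order
s.t();
```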
One Report Suite Only
As mentioned above, if you don’t have a separate report suite for each country site, either having just one report suite for the entire organization or a report suite for a region that contains multiple currencies, you cannot take advantage of the preceding currency translation feature. In this case, you have two choices. Your first choice is to use the same currency for all countries and pass data in that currency at the time of data collection. For example, if you have a European report suite, you may choose to use Euro as the primary currency and translate British Pounds and other non-Euro currencies into Euros at the time data is passed into SiteCatalyst. The second option is to pass currency amounts into a Numeric Success Event in a way that is currency agnostic. In this approach, you would not use the out-of-box Revenue event and instead would create a custom Numeric success event and pass in the raw numbers in the currency of that country. For example, if a 200 Euro order takes place in Germany, you would pass in a value of 200 and if a 300 British Pound order takes place, you would pass in a value of 300 to the Numeric success event. At the same time, you should pass in the currency the order took place in to an eVar. Once you have the raw transaction amount and the currency type, you can download the data to Excel using Adobe ReportBuilder and translate the raw Numeric success event numbers into the appropriate currency using a lookup table and referencing the eVar that indicates the currency. While this will not provide a way to see local currencies within the native SiteCatalyst interface, you can at least have your Excel dashboards show local currencies. Obviously you can use both of these approaches concurrently, using a master currency for the region and then providing local currencies in an Excel dashboard.
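The currency-agnostic option could be tagged roughly as follows. The `s` stub, the `event10` slot and the `eVar20` slot are assumptions for illustration; the raw local amount goes to the custom Numeric success event and the currency code goes to the eVar for later translation in Excel.

```javascript
// Sketch of the currency-agnostic approach: send the raw local amount
// to a custom Numeric success event (event10 assumed) and the currency
// code to an eVar (eVar20 assumed) for later lookup-table translation.
var s = { events: "", eVar20: "", t: function () {} };

function trackLocalOrder(amount, currencyCode) {
  s.events = "event10=" + amount;  // raw amount, no conversion applied
  s.eVar20 = currencyCode;         // e.g. "EUR" or "GBP"
  s.t();
}

trackLocalOrder(200, "EUR"); // a 200 Euro order in Germany
```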
Pegged Exchange Rates
Over the years, I have worked with several clients that use “pegged” exchange rates. In this scenario, their organization uses one set of currency exchange rates for the entire fiscal year instead of using the daily exchange rates. This causes a problem for Adobe SiteCatalyst, since its default behavior is to use the daily exchange rates found on xe.com. Keep in mind that the local currencies in country-specific report suites will be fine since they are not being translated into a master currency. In this scenario, the only figure that is negatively affected is the currency amount in your global report suite, since that is when currency translation occurs. For example, if you collect an order for 300 Euro in Germany and the German report suite is set to Euros, everything will be fine. However, when that 300 Euro order is sent to the global report suite (let’s assume it is a US-based organization), it will be translated into US Dollars by default using today’s exchange rate instead of your pegged exchange rate (which can be quite different).
Unfortunately, there isn’t a way to override this default behavior, so I recommend using a DB VISTA rule to have SiteCatalyst lookup the pegged exchange rates published by your organization. As currency data is collected, you can use DB VISTA to bypass or overwrite the exchange rate translation done by SiteCatalyst with the rates approved by your organization. Unfortunately, DB VISTA rules cost a few thousand dollars, but in this case, it is probably worth it to have your global currency figures reflected correctly.
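Conceptually, the translation a DB VISTA rule would perform server-side looks like the sketch below: apply the organization’s pegged fiscal-year rate instead of the daily xe.com rate. The rate table and function are hypothetical stand-ins, not actual VISTA rule syntax.

```javascript
// Illustrative stand-in for a pegged-rate conversion: translate a
// local-currency amount into the global suite currency using fixed
// fiscal-year rates. The rate table is hypothetical.
var PEGGED_RATES_TO_USD = { EUR: 1.10, GBP: 1.30, JPY: 0.0070 };

function toGlobalCurrency(amount, localCurrency) {
  var rate = PEGGED_RATES_TO_USD[localCurrency];
  if (rate === undefined) {
    throw new Error("No pegged rate defined for " + localCurrency);
  }
  // Round to cents to keep the global suite figures tidy.
  return Math.round(amount * rate * 100) / 100;
}

toGlobalCurrency(300, "EUR"); // 330 USD at the pegged 1.10 rate
```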
Interface Currency Setting
The last area related to currencies I want to cover is the currency setting found within the SiteCatalyst interface itself. I call this out because it can be very dangerous if you do not understand it. In the Report Settings area of the left navigation, there is a way to change the currency that you see when using SiteCatalyst. Here is what it looks like:
From this screen you can change the currency setting you use. Here is an example of me changing it from US Dollars to Euros:
Doing this will now show currency reports in Euros:
The dangerous part of this feature is that it seems like it does more than it actually does. How awesome is it that we instantaneously converted all of our data from US Dollars to Euros? Unfortunately, this is a mirage. Using this feature simply translates all historical data into the new currency (Euros in this case) using the current exchange rate. This means that historical data is not converted using the exchange rate that was in effect at the time the data was collected. Therefore, if the exchange rate has changed significantly, your data will be off. This is why it is important that you educate your users about this feature before they start using it and present inaccurate data to people in your organization. Once you understand how this feature works, you may re-think using it and proactively discourage its use!