DataViz: Crayola Crayons – How Color Has Changed

Readers:

When I was a young boy, I loved to color with my big box of Crayola Crayons. I would pull out blank sheets of paper and create multi-colored masterpieces (at least my mother said so).

eight_crayons-200x138

Crayola's crayon chronology tracks their standard box, from its humble eight-color beginnings in 1903 to the present day's 120-count lineup. According to Crayola, sixty-one of the seventy-two colors from the official 1975 set survive. [1]

A creative dataviz type who goes by the name Velociraptor (referred to from here on as "Velo") created the chart below to show the historical crayonology (I just made that word up!) of Crayola Crayon colors.

 

crayola_crayon_color_chart-520x520

Velo gently scraped Wikipedia's list of Crayola colors, corrected a few hues, and added the standard 16-count School Crayon box available in 1935.

Except for the dayglow-ski-jacket-inspired burst of neon magentas at the end of the ’80s, the official color set has remained remarkably faithful to its roots!

Ever industrious, Velo also calculated the average growth rate: 2.56% annually. For maximum understandability, he reformulated it as “Crayola’s Law,” which states:

The number of colors doubles every 28 years!

If the Law holds true, Crayola’s gonna need a bigger box, because by the year 2050, there’ll be 330 different crayons! [1]
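The arithmetic checks out. As a quick back-of-the-envelope verification (using the current 120-color lineup as a 2010 baseline, per the article's date):

doubling time = ln(2) / ln(1.0256) ≈ 0.693 / 0.0253 ≈ 27.4 years, i.e., roughly 28
colors in 2050 ≈ 120 × 1.0256^(2050 − 2010) = 120 × 1.0256^40 ≈ 120 × 2.75 ≈ 330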

A Second Version

Velo was not satisfied with his first version, so he produced the second version below. [2]

Crayola Color Chart

A Third Version (and interactive too!)

Click through to the interactive version for a larger view with mouseover color names!

———————————————————————————–

References:

[1] Stephen Von Worley, Color Me A Dinosaur, The History of Crayola Crayons, Charted, Data Pointed, January 15, 2010, http://www.datapointed.net/2010/01/crayola-crayon-color-chart/.

[2] Stephen Von Worley, Somewhere Over The Crayon-Bow, A Cheerier Crayola Color Chronology, Data Pointed, October 14, 2010, http://www.datapointed.net/2010/10/crayola-color-chart-rainbow-style/.

Infographic: Happy Diwali!

Readers:

INDIAN GIRL LIGHTS A DEEPAWALI LAMP IN AHMEDABAD

While called the "Festival of Lights," Diwali is most importantly a day to become aware of one's "inner light." In Hindu philosophy there is an idea of "Atman," something beyond the body and mind which is pure, infinite, and eternal. Today is a celebration of "good" versus "evil"; a day when the light of higher knowledge dispels ignorance. With this awakening comes compassion and joy.

The background story and practices vary region to region. Many people celebrate by lighting fireworks and sharing sweets and candies. Diwali is a holiday celebrated across a vast array of countries and religions. It is celebrated in India, Nepal, Sri Lanka, Myanmar, Mauritius, Guyana, Trinidad & Tobago, Suriname, Malaysia, Singapore and Fiji, by Hindus, Jains, Sikhs and Buddhists.

This informative infographic is from 2012, but I like the information about Diwali it provides and thought I would share it.

Namaste!

Michael

diwali_infographic_final1

Source: Metal Gaia, Happy Diwali!, November 13, 2012, http://metal-gaia.com/2012/11/13/happy-diwali/.

Using the R Integration Functionality to Perform Text Mining on a MicroStrategy Report and Display the Result

Readers:

Here is another great post in the MicroStrategy Community from Jaime Perez (photo, right) and his team. A lot of work went into the preparation of this post, and it shows some great ways to use the R integration with MicroStrategy.

Contributors from Jaime’s team include:

  • ssonobe
  • Lili
  • Joanne A
  • Ohingst

Enjoy!

Michael

Text Mining Using R Integration in MicroStrategy

Users may wish to perform text mining using R on the result of an arbitrary MicroStrategy report and display the result. One problem that hinders users from achieving this is that the number of output elements is not always consistent with the number of input elements. For example, a report may have three attributes named 'Age groups', 'Reviewer', and 'Survey feedback' and might display four rows of feedback as follows:

01.jpg

If the above report result is sent to R as an input and the R script breaks each feedback sentence down into term frequencies grouped by the age groups, the output will have 18 rows.

02.jpg

Since the number of output elements is greater than the number of the MicroStrategy report rows, the report execution will fail. Using the objects in the Tutorial project, this technical note (TN207734) describes one way to display the result of text mining on a MicroStrategy report, using the R integration functionality.

PREMISE:
– Following the instructions in TN43665, the MicroStrategy R Integration Pack has already been installed on the Intelligence Server.

The Steps Involved

STEP 1: Decide on the input values that need to be sent to R via R metrics
The first step is to decide on which data you wish to perform text mining. In this technical note, the sample report will let users select one year element and an arbitrary number of category elements, and specify the Revenue amount in prompts. The report will then display the value of the normalized TF-IDF (term frequency and inverse document frequency) for every word showing up in the qualified Item attribute elements, grouped by the Category elements.
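For reference, TF-IDF scores a term highly when it is frequent within one document but rare across documents. The technote does not spell out the exact variant it uses, so treat the following common formulation, with each Category acting as one "document" built from its Item names, as an assumption:

tf-idf(t, c) = tf(t, c) × log(N / df(t))
norm.tf-idf(t, c) = tf-idf(t, c) / max(tf-idf)

where tf(t, c) is the number of times term t appears in category c, N is the number of categories, and df(t) is the number of categories in which t appears.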

A user may select the following values for each prompt and the report may look as shown below.

  • Year: 2012
  • Category: Books, Movies, and Music
  • Revenue: greater than $15,000

03.jpg

Eventually, the user may want to see the normalized TF-IDF for every word showing up in the Item attribute elements as shown below:

04.jpg

Since the final output displays each word from the Item attribute, grouped by the Category elements, the necessary input values to R are as follows:

  • The elements of the Category attribute.
  • The elements of the Item attribute.

 

STEP 2: Create metrics to pass the input values to R

The input values to R from MicroStrategy must be passed via metrics. Hence, on top of the current grid objects, additional metrics need to be created. For this sample report, since the inputs are the elements of two attributes, create two metrics with the following definitions so that the elements are displayed as metrics.

Max(Category@DESC) {~}

Max(Item@DESC) {~}

05.jpg

 

STEP 3: R script – Phase 1: Define input and output variables and write R script to obtain what you wish to display in a MicroStrategy report

In the R script, define (1) a variable that receives the inputs from MicroStrategy and (2) a variable that will be sent back to MicroStrategy as the output, as depicted below. Since the number of output elements must match the number of input elements, the output is defined as "output = mstrInput2" to avoid this error. In other words, the script executes R functions to obtain the data that you wish to display in a MicroStrategy report, but its output is simply its input. More details about how to display the result in a MicroStrategy report follow later in this technical note.

06.jpg
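As a minimal sketch, the skeleton of the script is just this (the input names follow the note's mstrInput convention; in practice the inputs and output are declared through the deployR header comments, which are omitted here):

# mstrInput1: elements of the Category attribute (string vector; name assumed)
# mstrInput2: elements of the Item attribute (string vector)

# ... text mining happens here (see the TF-IDF sketch below) ...

output <- mstrInput2  # echo an input so the output row count matches the report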

 

In this technical note, we assume that after manipulating the input values, the variable named 'norm.TF.IDF' in the R script holds the normalized TF-IDF value for each term.

07.jpg
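The full script appears only as a screenshot, so here is a hedged base-R sketch of one way to build such a matrix. It assumes mstrInput1 holds the Category elements and mstrInput2 the Item elements, and it normalizes by the maximum weight; the technote's actual preprocessing may differ:

# One "document" per category: concatenate the category's item names
docs  <- tapply(mstrInput2, mstrInput1, paste, collapse = " ")
words <- strsplit(tolower(gsub("[[:punct:]]", "", docs)), "\\s+")

# Term counts per category (a Word x Category contingency table)
tf <- table(Word     = unlist(words),
            Category = rep(names(words), lengths(words)))

# Inverse document frequency: down-weight words common to many categories
idf <- log(ncol(tf) / rowSums(tf > 0))

TF.IDF      <- tf * idf              # idf recycles down the Word rows
norm.TF.IDF <- TF.IDF / max(TF.IDF)  # scale all weights into [0, 1]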

 

STEP 4: Create tables in the data warehouse to store the value of your R output

In order to display the values of 'norm.TF.IDF' in a MicroStrategy report, tables to hold the result need to be created in the data warehouse. In other words, an additional report will later have to be created in MicroStrategy, and it will extract its data from the database tables created in this section.

In this specific example, the variable ‘norm.TF.IDF’ has the elements of words (terms) and categories and the values of the normalized TF-IDF. Considering the types of data, the first two should be displayed as attributes and the values of the normalized TF-IDF should be presented in a metric. Hence, two lookup tables to hold the term and category elements and one fact table need to be created to store all the data. On top of these tables, one relationship table is also required since the relationship between words and categories is many-to-many.
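A hedged sketch of the DDL for SQL Server follows (the table names match the R variables used in Step 5; the column names and data types are assumptions):

CREATE TABLE tm_Category (Category_ID int, Category_Desc varchar(255))
CREATE TABLE tm_Word     (Word_ID int, Word_Desc varchar(255))
CREATE TABLE tm_Word_Cat (Word_ID int, Category_ID int)
CREATE TABLE tm_Fact     (Word_ID int, Category_ID int, weight float)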

 

STEP 5: R script – Phase 2: Populate the tables in your R script

As previously mentioned, the variable named 'norm.TF.IDF' contains the values that a user wishes to display in a MicroStrategy report, as shown below.

07.jpg

 

In this R script, four more variables are defined from 'norm.TF.IDF', each of which contains a subset of the data that will be inserted into the database tables.

 

tm_Category holds the unique elements of the Category.

10.jpg

 

tm_Word holds the unique elements of the Word (Term).

11.jpg

 

tm_Word_Cat stores the values of the many-to-many relationship.

12.jpg

 

tm_Fact contains the values of TF-IDF for every Word-Category combination.

13.jpg
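The note shows these four variables only as screenshots, so here is a hedged sketch of how they might be derived from 'norm.TF.IDF' (the Word/Category column names assume the dimnames used in the earlier sketch, and the surrogate IDs are simply generated sequences):

# Flatten the Word x Category matrix into (Word, Category, weight) rows
df <- as.data.frame(as.table(norm.TF.IDF), responseName = "weight")
df <- df[df$weight > 0, ]  # keep only observed word/category pairs

# Lookup tables with generated surrogate IDs
tm_Category <- data.frame(Category_ID   = seq_along(unique(df$Category)),
                          Category_Desc = unique(df$Category))
tm_Word     <- data.frame(Word_ID   = seq_along(unique(df$Word)),
                          Word_Desc = unique(df$Word))

# Map each row back to its IDs for the relationship and fact tables
df$Word_ID     <- match(df$Word, tm_Word$Word_Desc)
df$Category_ID <- match(df$Category, tm_Category$Category_Desc)
tm_Word_Cat    <- df[, c("Word_ID", "Category_ID")]
tm_Fact        <- df[, c("Word_ID", "Category_ID", "weight")]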

 

In the R script, populate the database tables with the above four subsets of ‘norm.TF.IDF’.

# Load RODBC
library(RODBC)

# RODBC package: assign ch the connectivity information
ch <- odbcConnect("DSN_name")

# Delete all the rows of the tables
sqlClear(ch, "tm_Category", errors = TRUE)
sqlClear(ch, "tm_Word",     errors = TRUE)
sqlClear(ch, "tm_Word_Cat", errors = TRUE)
sqlClear(ch, "tm_Fact",     errors = TRUE)

# Append the four data frames to the corresponding tables
# (fast = TRUE inserts via a parametrized query)
sqlSave(ch, tm_Category, tablename = "tm_Category", rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Word,     tablename = "tm_Word",     rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Word_Cat, tablename = "tm_Word_Cat", rownames = FALSE, append = TRUE, fast = TRUE)
sqlSave(ch, tm_Fact,     tablename = "tm_Fact",     rownames = FALSE, append = TRUE, fast = TRUE)

# Close the channel
odbcClose(ch)

 

STEP 6: Create and add an R metric, which implements the R script

The R script is done. It is time to implement it from MicroStrategy by creating an R metric. In the deployR interface, open the R script and define the input and output that you specified in Step 3, as follows. Since the elements of the Category and Item attributes are characters, choose "String" as the data type. Likewise, since the output is the same as mstrInput2, its data type is also set to String.

14.jpg

 

Create a stand-alone metric and paste in the metric expression generated by the deployR utility. Then, replace the last parameters with the Category and Item metrics that you created in Step 2.

15.jpg

 

Add the R metric to the report.

15.2.png

 

The report and R will perform the following actions after the R metric is added:
i. The report lets users select the prompt answers
ii. MicroStrategy sends the Category and Item elements to R via the R metric
iii. R performs text mining to calculate the TF-IDF based on the inputs
iv. R generates subsets of the TF-IDF
v. R truncates the database tables and populates them with the subsets of the TF-IDF
vi. R sends the output (which is actually the input) back to MicroStrategy
vii. The report displays the values of all objects, including the R metric

 

STEP 7: Create MicroStrategy objects to display the data

From the tables created in Step 4, create the Word and Category attributes and the fact named weight. The object relationship is as depicted below.

08.jpg

09.jpg

 

Now, create a new report with these objects. This report will obtain and display the data from the database tables.

16.jpg

 

STEP 8: Utilize the report-level VLDB properties to manipulate the order of the report execution jobs

There are currently two reports; let them be named R1 and R2 as described below:

  • R1: A report that prompts users to specify the report requirements and implements the R script executing the text mining
  • R2: A report that obtains the result of the text mining from the database and displays it

 

If the two reports are placed in a document as datasets as shown below, there is one problem: R2 may start its execution before R1 populates the database tables with the result of text mining.

17.jpg

 

In order to force R2 to execute its job after the completion of R1, the PRE/POST statement VLDB properties, along with an additional database table, may be used. The table tm_Flag contains the value 0 or 1; R2 is triggered when R1 sets the value of completeFlag to 1. The detailed steps are described below, with the scripts written for SQL Server.

 

i. Create another table in the database, which holds a value of 0 or 1

CREATE TABLE tm_Flag
(
   completeFlag int
)


INSERT INTO tm_Flag VALUES(0)

 

ii. In the VLDB property 'Report Post Statement 1' of the R1 report, define a Transact-SQL statement that changes the value of completeFlag to 1.

DECLARE @query as nvarchar(100)
SET @query = 'UPDATE tm_Flag SET completeFlag = 1'
EXEC sp_executesql @query

 

iii. Define the VLDB property 'Report Pre Statement 1' in R2 so that it checks the value of completeFlag every second and loops until the value turns to 1. After the loop, it reverts the value of completeFlag back to 0. After this Report Pre Statement completes, R2 obtains the data from the database tables that R1 has populated.

DECLARE @intFlag INT
SET @intFlag = (select max(completeFlag) from tm_Flag)

WHILE(@intFlag = 0)
BEGIN
	WAITFOR DELAY '00:00:01'
	SET @intFlag = (select max(completeFlag) from tm_Flag)
END

DECLARE @query as nvarchar(100)
SET @query = 'UPDATE tm_Flag SET completeFlag = 0'
EXEC sp_executesql @query

 

Activity Diagram

18_revised.png

 

 

Overall execution flow

  1. Answer prompts

19.png

 

  2. Only the text mining result is displayed to users

20.png

 

Third Party Software Installation:

WARNING: The third-party product(s) discussed in this technical note is manufactured by vendors independent of MicroStrategy. MicroStrategy makes no warranty, express, implied or otherwise, regarding this product, including its performance or reliability.

 

Kimball Group Retiring on December 31, 2015

Readers:

I received the following e-mail from the Kimball Group. Thought I would share.

Best regards,

Michael

Kimball Group Retiring on December 31, 2015

During the past three decades, we have worked with hundreds of clients, written thousands of pages, taught tens of thousands of students, and flown millions of miles. It's been incredibly rewarding and challenging, but it will soon be time to move on. The members of the Kimball Group will retire at the end of December 2015.

We wanted to give you plenty of notice while there’s still time to engage us or enroll in our classes (or both).

  • Kimball University Public Classes: Several Dimensional Modeling and DW/BI Lifecycle classes are scheduled for the remainder of this year. We’ll announce our 2015 “final tour” in mid-December.
  • Kimball University Private Onsite Classes: Check out our onsite classes and contact Margy if you have questions.
  • Kimball Group Consulting: Check out our consulting offerings and contact Bob if you have questions.

Stay tuned for more details during the next several months.

We have learned a tremendous amount from our clients, students and readers through the years and are extremely grateful for your business, intelligence, wit and kindness. We hope to see as many of you as we can during the coming year as we approach retirement.

Thanks and best regards,

Ralph, Julie, Margy, Bob, Joy and Nancy

Bryan Brandow: Triggering Cubes & Extracts using Tableau or MicroStrategy

trigger-720x340

Bryan Brandow (photo, right), a Data Engineering Manager for a large social media company, is one of my favorite bloggers out there when it comes to thought leadership and digging deep into the technical aspects of Tableau and MicroStrategy. Bryan just blogged about triggering cubes and extracts; here is a brief synopsis.

One of the functions that never seems to be included in BI tools is an easy way to kick off an application cache job once your ETL is finished. MicroStrategy's Cubes and Tableau's Extracts both rely on manual or time-based refresh schedules, but this leaves you in a position where your data will land in the database and you'll either have a large gap before the dashboard is updated or you'll be refreshing constantly and wasting lots of system resources. They both come with command line tools for kicking off a refresh, but then it's up to you to figure out how to link your ETL jobs to call these commands. What follows is a solution that works in my environment and will probably work for yours as well. There are of course a lot of ways for your ETL tool to tell your BI tool that it's time to refresh a cache, but this is my take on it. You won't find a download-and-install software package here since everyone's environment is different, but you will find ample blueprints and examples for how to build your own for your platform and for whatever BI tool you use (from what I've observed, this setup is fairly common). Trigger was first demoed at the Tableau Conference 2014. You can jump to the Trigger demo here.

I recommend you click on the link above and give his blog post a full read. It is well worth it.

Best regards,

Michael

Tips & Tricks #13: Star Schema in MicroStrategy

Star Schema 1

A Star Schema is a design that contains only one lookup table for each hierarchy in the data model instead of separate lookup tables for each attribute. With only a single lookup table per hierarchy, the IDs and descriptions of all attributes in the hierarchy are stored in the same table. This type of structure involves a great degree of redundancy. As such, star schemas are always completely denormalized. Let's review the star schema above, based on the MicroStrategy Tutorial data model.

The schema contains only two lookup tables, one for each hierarchy. LU_LOCATION stores the data for all of the attributes in the Location hierarchy, while LU_CUSTOMER stores the data for all of the attributes in the Customer hierarchy. As a result, star schemas contain very few lookup tables: one for each hierarchy present in the data model. Each lookup table contains the IDs and descriptions (if they exist) for all of the attribute levels in the hierarchy.

Even though you have fewer tables in a star schema than a snowflake, the tables can be much larger because each one stores all of the information for an entire hierarchy. When you need to query information from the fact table and join it to information in the lookup tables, only a single join is necessary in the SQL to achieve the desired result.

Joins in a Star Schema

As an example, if you run the same report to display customer state sales, only one join between the lookup table and the fact table is required to obtain the result set, as illustrated below.

Star Schema 2

To join the Customer State description (Cust_State_Desc) to the Sales metric (calculated from Sales_Amt) requires only one join between tables since the Customer State ID and description are both stored in the LU_CUSTOMER table. As a result, the query has to access only one lookup table to obtain all of the necessary information for the report.
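As a sketch, the generated SQL would look roughly like this (LU_CUSTOMER, Cust_State_Desc, and Sales_Amt come from the example above; the fact table name and the Customer_ID join key are assumptions):

SELECT   c.Cust_State_Desc,
         SUM(f.Sales_Amt) AS Sales
FROM     FACT_SALES f
JOIN     LU_CUSTOMER c
  ON     f.Customer_ID = c.Customer_ID
GROUP BY c.Cust_State_Desc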

Even though achieving this result set requires only a single join, star schemas do not necessarily equate to better performance. Depending on the volume of data in any one hierarchy, you may be joining a very large lookup table to a very large fact table. In such cases, more joins between smaller tables can yield better performance.

Characteristics of a Star Schema

The following is a list of characteristics of a star schema.

  • Contains fewer tables (one per hierarchy)
  • Contains very large tables (much larger than some forms of snowflake schemas due to storing all attribute ID and description columns)
  • Stores the IDs and descriptions of all the attributes in a hierarchy in a single table
  • Requires only a single join when querying fact data regardless of the attribute level at which you are querying data

—————————————————————————

Source: MicroStrategy University, MicroStrategy Advanced Data Warehousing, Course Guide, Version: ADVDW-931-Sep13-CG.

Stephen Few: Now You See It

Portland

Readers:

I was in Portland, Oregon last week attending three data visualization workshops by industry expert Stephen Few. I was very excited to be sitting at the foot of the master for three days and soaking in all of this great dataviz information.

Last Thursday, was the third workshop, Now You See It which is based on Steve’s best-selling book (see photo below).

To not give away too much of what Steve is teaching in the workshops, I have decided to discuss one of our workshop topics, human perceptual and cognitive strengths.

You can find future workshops by Steve on his website, Perceptual Edge.

Best Regards,

Michael

Now You See It

 

Designed for Humans

Good visualizations and good visualization tools are carefully designed to take advantage of human perceptual and cognitive strengths and to augment human abilities that are weak.

Quickly, tell me how many blue circles you see below. If the goal is to count the circles, the visualization below isn't well designed: it is difficult to remember which circles you have and have not counted.

Design for Humans 1

The visualization below shows the same number of circles but is well designed for the counting task. Because the circles are grouped into small sets of five, it is easy to remember which groups have and have not been counted, easy to quickly count the number of circles in each group, and easy to discover with little effort that each of the five groups contains the same number of circles (i.e., five), resulting in a total count of 25 circles.

Design for Humans 2

The arrangement below is better yet.

Design for Humans 3

Information visualization makes possible an ideal balance between unconscious perceptual and conscious cognitive processes. With the proper tools, we can shift much of the analytical process from conscious processes in the brain to pre-attentive processes of visual perception, letting our eyes do what they do extremely well.

Stephen Few: Information Dashboard Design

Readers:

I am in Portland, Oregon this week attending three data visualization workshops by industry expert Stephen Few. I am very excited to be sitting at the foot of the master for three days and soaking in all of this great dataviz information.

Today was the second workshop, Information Dashboard Design, which is based on Steve's best-selling book (see photo below).

To not give away too much of what Steve is teaching in the workshops, I have decided to discuss one of the dashboard exercises we did in class. The goal was to identify what we felt was wrong with the dashboard.

I will show you the dashboard first. Then, you can see our critique below.

You can find future workshops by Steve on his website, Perceptual Edge.

Best Regards,

Michael

Information Dashboard Design

 

Dashboard To Critique

CORDA Airlines Dashboard

Critique Key Points

  • Top left chart – only the left-hand corner chart has anything to do with flight loading
  • Top left chart – are flight numbers useful?
  • Two Expand/Print buttons – need more clarity (a right-click on the chart would be a better choice)
  • Top right chart – poor use of pie charts; the sizes of the pies only convey which sales channel is largest. Use small-multiple bar charts, with total sales as a fourth bar chart
  • Redundant use of "February" – in the title and in the charts
  • Bottom left chart – why does it have a pie chart in it?
  • Bottom right chart – the map may be better as a bar chart (a geographical display could be useful if we had more information). The way the bubbles are currently expressed is not useful (use % cancellations instead), and the symbols may have a different meaning every day
  • Bottom right chart – CORDAir logo: is this necessary?
  • Location of the drop-down – not clear whether it applies to the top left chart or to all charts
  • Backgrounds – heavy colors, gradients
  • Instructions – should be in a separate help document; users only need to learn this once
  • Top left chart – faint image in the background, supposed to look like a flight seating map. Do you really want to see this every day? It is a visual distraction.
  • IMPORTANT: Is there visual context offered with any of the graphs? No. This is critical.

————————————————————————————————-

Dashboard Example Source: Website of Corda Technologies Incorporated, which has since been acquired by Domo.

Stephen Few: Show Me The Numbers

Readers:

I am in Portland, Oregon this week attending three data visualization workshops by industry expert Stephen Few. I am very excited to be sitting at the foot of the master for three days and soaking in all of this great dataviz information.

Yesterday was the first workshop, Show Me the Numbers, which is based on Steve's best-selling book (see photo below).

To not give away too much of what Steve is teaching in the workshops, I have decided to give one “before and after” example each day with Steve’s explanation of why he made the changes he did.

You can find future workshops by Steve on his website, Perceptual Edge.

Best Regards,

Michael

Show Me the Numbers

 

“Before” Example

In the example below, the message contained in the titles is not clearly displayed in the graphs. The message deals with the ratio of indirect to total sales: how it is declining domestically while holding steady internationally. You'd have to work hard to get this message from the display as it is currently designed.

Before - Show Me the Numbers

 

“After” Example

The revised example below, however, is designed very specifically to display the intended message. Because this graph is skillfully designed to communicate, its message is crystal clear. A key feature that makes this so is the choice of percentage, rather than dollars, for the quantitative scale.

After - Show Me the Numbers

Additional Thoughts From Steve

The type of graph that is selected and the way it’s designed also have great impact on the message that is communicated. By simply switching from a line graph to a bar graph, the decrease in job satisfaction among those without college degrees in their later years is no longer as obvious.

More Thoughts - Show Me the Numbers

Tips & Tricks #12: How to Troubleshoot Cross Joins in SQL Reports for the SQL Generation Engine 9.x

MicroStrategy Community Banner

Readers:

In my last blog post, I wrote about the new MicroStrategy Community. Jaime Perez, VP of Worldwide Customer Services, and his crew have come up with a better way for us to engage with MicroStrategy as well as his team.

Speaking of Jaime, last June he posted this great tip as a TechNote on the MicroStrategy Knowledgebase site. I am reblogging it since it answers one of the questions I am asked most frequently, and I find it an extremely useful Tip & Trick. Also, this will give you an idea of the great stuff being posted in the MicroStrategy Community.

Best Regards,

Michael

MicroStrategy and Cross Joins

In some scenarios, one may encounter cross joins in the SQL View of a standard SQL report in MicroStrategy. Cross joins appear when two tables do not have any common key attributes between them on which they can inner join. As a result, the two tables essentially combine to create one table that has all the data from both tables, but this results in poorer performance, with a common effect of increased execution times. Sometimes these execution times, and the performance hits, can be very severe. Therefore, it is important to understand some simple steps that can be performed to resolve a cross join, as well as some steps to understand why it may be appearing in the SQL View of the report.

One common occurrence of a cross join is when a report contains at least two unrelated attributes in the grid, and no metrics are present to relate the unrelated attributes via a fact table. Such an occurrence can be resolved in a few ways:

  1. Create a relationship filter, set the output level as the unrelated attributes (or the entire report level), and then relate these by a Logical Table object
  2. Create a relationship filter, set the output level as the unrelated attributes (or the entire report level), and then relate these by a Fact object
  3. Add a metric to the report that uses a fact from a table to which both attributes can inner join

Each of these provides a pathway from the fact table to the lookup tables from which the unrelated attributes are sourced. The result is an inner join between the fact table and the lookup tables, which resolves the cross join between the two unrelated lookup tables.
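As an illustration, here is roughly what the difference looks like in SQL (the table and column names are invented for this example):

-- Cross join: the two lookup tables share no common key
SELECT   a11.Region_Desc, a12.Category_Desc
FROM     LU_REGION a11
         CROSS JOIN LU_CATEGORY a12

-- Resolved: a metric's fact table supplies the join path
SELECT   a11.Region_Desc, a12.Category_Desc,
         SUM(a13.Sales_Amt) AS Sales
FROM     FACT_SALES a13
         JOIN LU_REGION a11 ON a13.Region_ID = a11.Region_ID
         JOIN LU_CATEGORY a12 ON a13.Category_ID = a12.Category_ID
GROUP BY a11.Region_Desc, a12.Category_Desc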

Options 1 and 2 provide a means by which the report template can remain attributes-only, whereas Option 3 puts a metric on the report. Option 3 may not be desired if you do not want a metric placed on the report. Keep in mind that other techniques can also be employed to keep the metric on the report but formatted to be hidden from display.

More common scenarios involve cross joins between a fact table and a lookup table, and these are typically surprising to a developer. These situations can be a bit trickier to troubleshoot and resolve, but here are a few techniques that can be employed to try to resolve the issue:

  1. In SQL View look at where the cross join appears, and between which tables the cross join appears
  2. Open up those tables in the Table Editor by navigating to the Schema Objects\Tables folder, and double-clicking the tables
  3. Select the Logical View Tab of both tables to see all the logical objects mapped to the table
  4. Take note of which attributes have a key icon beside them
  5. These key attributes denote attributes at the lowest level of their hierarchy presently mapped to the table and/or attributes that are in their own hierarchy (meaning they have no parents or children)
  6. The SQL Engine will join 2 tables on common key attributes only, so if none of the key attributes on either table exist on both tables, then a cross join should appear

This means that just because a Region attribute exists on Table_A and a Region attribute exists on Table_B does not necessarily mean that the SQL Engine will join on Region.  If Region has its child attribute on the table, then that attribute should be the key as it is the lowest level attribute of its particular hierarchy mapped to the table.  If Region exists on both tables, and is also a key attribute on both tables, then an inner join should take place on Region.

This essentially means that one can find a cross join, investigate the tables in which it appears, and verify whether at least one common key attribute exists between the tables. If none does, then that should be the first path to investigate, because a cross join is correct in that scenario.

Video

You can find a detailed video on how this issue is reproduced and resolved in Tech Note 71019: Steps to reproduce and resolve [2].

Note

MicroStrategy Technical Support can assist with resolving cross joins in a specific report; however, caution should be taken when resolving such issues. In some scenarios, the cross join is resolved through modifications to the schema objects, which can have a ripple effect on all other reports in an environment. For example, if a relationship is changed in the Region attribute to resolve a cross join in one report, this change will be reflected in all other reports that use Region, and potentially the hierarchy to which Region belongs. As a result, the SQL View of one report will have the cross join resolved, but the SQL may have changed in other reports using Region or its related attributes. This may or may not be desired. MicroStrategy Technical Support may not be able to fully understand the impact of such a schema change on the data model, so before a change is made to the data model, the consequences of such a change should be fully understood by the developer, and any changes made to the schema should be recorded.

References

[1] Jaime Perez, TN47356: How to troubleshoot cross joins in SQL Reports for the SQL Generation Engine 9.x, MicroStrategy Community, 06/24/2014, http://community.microstrategy.com/t5/Architect/TN47356-How-to-troubleshoot-cross-joins-in-SQL-Reports-for-the/ta-p/196989.

[2] MicroStrategy Knowledgebase, Tech Note 71019: Steps to Reproduce and Resolve.
