Author Archive: Michael

Fast Company: Is Flat Design Already Passé?

Source: John Brownlee, Is Flat Design Already Passé?, Fast Company, Co.DESIGN, April 11, 2014, http://www.fastcodesign.com/3028944/is-flat-design-already-passe.

Blog Note: A skeuomorph is a derivative object that retains ornamental design cues from structures that were necessary in the original. Examples include pottery embellished with imitation rivets reminiscent of similar pots made of metal, and a software calendar that imitates the appearance of the binding on a paper desk calendar.

Over the last few years, we’ve seen an upheaval in the way computer interfaces are designed. Skeuomorphism is out, and flat is in. But have we gone too far? Perhaps we’ve taken the skeuomorphic death hunts as far as they can go, and it’s high time we usher in a new era of post-flat digital design.

John Brownlee

Ever since the original Macintosh redefined the way we interact with computers by creating a virtual desktop, computer interfaces have largely been skeuomorphic, mimicking the look of real-world objects. In the beginning, computer displays were limited in resolution and color, so the goal was to make computer interfaces look as realistic as possible. And since most computer users weren’t yet experienced, skeuomorphism became a valuable tool to help people understand digital interfaces.

But skeuomorphism stopped making sense once photo-realistic computer displays became ubiquitous. Computers have no problem tricking us into thinking that we’re looking at something real, so we don’t need tacky design tricks like fake stitching or Corinthian leather to fool us into thinking our displays are better than they are. In addition, most people have grown up in a world where digital interfaces are common. UI elements no longer have to look like real-world objects for people to understand them.

This is why, when Jony Ive took over the design of Apple’s iOS and OS X operating systems, he began a relentless purge of the numerous skeuomorphic design elements that his predecessor, skeuomaniac Scott Forstall, had created. To quote Fast Company’s own John Pavlus, “skeuomorphism is a solution to a problem that iOS no longer has,” and that’s true of other operating systems and apps too. Google, Microsoft, Twitter, Facebook, Dropbox, even Samsung: they’re all embracing flat design, throwing out the textures and gradients that once defined their products in favor of solid hues and typography-driven design.

This is, without a doubt, a good thing. Skeuomorphism led to some exceedingly one-dimensional designs, such as iOS 6’s execrable billiard-style Game Center design. But in an excellent post, Collective Ray designer Wells Riley argues that things are going too far.

Flat design is essentially as far away from skeuomorphism as you can get. Compare iOS 7’s bold colors, unadorned icons, transparent overlays, and typography-based design to its immediate predecessor, iOS 6. Where once every app on iOS had fake reflections, quasi-realistic textures, drop shadows, and pseudo-3-D elements, iOS 7 uses pure colors, no gradients, no shadows, and embraces the 2-D nature of a modern smartphone display. But while flat design has made iOS 7 look remarkably consistent, it has also removed a lot of personality from the operating system. It’s like the Gropius House, where the old iOS 6 was a circus funhouse. Maybe it needs to get a little bit of that sense of madness back.

Here’s how Riley defines elements of a post-flat interface:

• Hierarchy defined using size and composition along with color.

• Affordant buttons, forms, and interactive elements.

• Skeuomorphs to represent 1:1 analogs to real-life objects (the curl of an e-book page, for example) in the name of user delight.

• Strong emphasis on content, not ornamentation.

• Beautiful, readable typography.

 

Riley’s argument is that flat design has allowed digital designers to wipe the slate clean in terms of how they approach their work, but it has also hindered a sense of wonder and whimsy. Software should still strongly emphasize content, color, and typography over ornamentation, but why is, say, the curl of a page when you’re reading an e-book such a crime, when it so clearly gives users delight?

“Without strict visual requirements associated with flat design, post-flat offers designers tons of variety to explore new aesthetics—informed by the best qualities of skeuomorphic and flat design,” Riley writes. “Dust off your drop shadows and gradients, and introduce them to your flat color buttons and icons. Do your absolute best work without feeling restricted to a single aesthetic. Bring variety, creativity, and delight back to your interfaces.”

Maybe Riley has a point. Why should mad ol’ Scott Forstall be allowed to ruin skeuomorphism for everyone? With the lightest of brush strokes, skeuomorphism can be used to bring a sense of personality and joy back to our apps. For those of us growing listless in the wake of countless nearly identical “flat” app designs, he makes a good point. It is time for the pendulum, having swung toward flat and away from skeuomorphism, to swing back, if only a little bit.

An Introduction to Data Blending – Part 4 (Data Blending Design Principles)

Readers:

In Part 3 of this series on data blending, we examined the benefits of blending data. We also reviewed an example of data blending that illustrated the possible outcomes of an election for the District 2 Supervisor of San Francisco.

Today, in Part 4 of this series, we will discuss data blending design principles and show another illustrative example of data blending using Tableau.

Again, much of Parts 1, 2, 3 and 4 is based on a research paper written by Kristi Morton from The University of Washington (and others) [1].

You can learn more about Ms. Morton’s research as well as other resources used to create this blog post by referring to the References at the end of the blog post.

Best Regards,

Michael

Data Blending Design Principles

In this part, we describe the primary design principles upon which Tableau’s data blending feature was based. These principles were influenced by the application needs of Tableau’s end-users. In particular, we designed the blending system to integrate datasets on-the-fly, to be responsive to change, and to be driven by the visualization. Additionally, we assumed that the user may not know exactly what she is looking for initially, and needs a flexible, interactive system that can handle exploratory visual analysis.

Push Computation to Data and Minimize Data Movement

Tableau’s approach to data visualization allows users to leverage the power of a fast database system. Tableau’s VizQL algebra is a declarative language for succinctly describing visual representations of data and analytics operations on the data. Tableau compiles the VizQL declarative formalism representing a visual specification into SQL or MDX and pushes this computation close to the data, where the fast database system handles computationally intensive aggregation and filtering operations. In response, the database provides a relatively small result set for Tableau to render. This is an important factor in Tableau’s choice of post-aggregate data integration across disparate data sources – since the integrated result sets must represent a cognitively manageable amount of information, the data integration process operates on small amounts of aggregated, filtered data from each data source. This approach avoids the costly migration effort to collocate massive data sets in a single warehouse, and continues to leverage fast databases for performing expensive queries close to the data.

Automate as Much as Possible, but Keep User in Loop

Tableau’s primary focus has been on ease of use, since most of Tableau’s end-users are not database experts but come from a variety of domains and disciplines: business analysts, journalists, scientists, students, etc. This led them to take a simple, pay-as-you-go integration approach in which the user invests minimal upfront effort or time to receive the benefits of the system. For example, the data blending system does not require the user to specify schemas for their data sets; rather, the system tries to infer this information as well as how to apply schema matching techniques to blend them for a given visualization. Furthermore, the system provides a simple drag-and-drop interface for the user to specify the fields for a visualization, and if there are fields from multiple data sources in play at the same time, the blending system infers how to join them to satisfy the needs of the visualization.

In the case that something goes wrong (for example, if the schema matching does not succeed), the blending system provides a simple interface for specifying data source relationships and how blending should proceed. Additionally, the system provides several techniques for managing the impact of dirty data on blending, which we discuss in more detail in Part 5 of this series.

Another Example: Patient Falls Dashboard [3]

NOTE: The following example is from Jonathan Drummey via the Drawing with Numbers blog site. The example uses Tableau v7, but at the end of the instructions on how he creates this dashboard in Tableau v7, Mr. Drummey includes instructions on how the steps become simpler in Tableau v8. I have included a reference to this blog post on his site in the reference section of my blog entry. The “I”/“me” voice you read in this example is that of Mr. Drummey.

As part of improving patient safety, we track all patient falls in our healthcare system, along with the number of patient days – the total number of days of inpatient stays at the hospital. Every month we report to the state our “fall rate,” a metric of the number of falls with injury for certain units in the hospital per 1000 patient days, i.e. days that patients are at the hospital. Our annualized target is to have less than 0.7 falls with injury per 1000 patient days.
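To make the metric concrete, the fall rate is simply falls with injury divided by patient days, scaled to 1,000 patient days. As a Tableau-style calculation it might look like the sketch below; the field names here are hypothetical, and the actual workbook (as described later) works from pre-aggregated Numerator and Denominator fields instead:

//hypothetical sketch of the fall rate metric, not the workbook's actual fields
1000 * SUM([Falls with Injury]) / SUM([Patient Days])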

A goal for our internal dashboard is to show the last 13 months of fall rates as a line chart, with the most recent fall events as a bar chart, in a combined chart, along with a separate text table showing some details of each fall event. Here’s the desired chart, with mocked-up data:

 

combo bars and lines

On the surface, blending this data seems really straightforward. We generate a falls rate every month for every reporting unit, so use that as the primary, then blend in the falls as they happen. However, this has the following issues:

  • Sparse Data – As I’m writing this, it’s March 7th. We usually don’t get the denominator of the patient days for the prior month (February) for a few more days yet, so there won’t be any February row of measure data to use as the primary to get the February fall events to show on the dashboard. In addition, there still wouldn’t be any March data to get the March fall events. Sometimes when working with a blend, the solution is to flip our choices for the primary and secondary data source. However, that doesn’t work either, because a unit might go for months or years without a patient fall, so there wouldn’t be any fall events to blend the measure data into.
  • Falls With and Without Injury – In the bar chart, we don’t just want to show the number of patient falls; we want to break down the falls by whether or not they were falls with injury – the numerator for the fall rate metric – and all other falls. The goal of displaying that data is to help the user keep in mind that as important as it is to reduce the number of falls with injury, we also need to keep the overall number of falls down as well. No fall = no chance of fall with injury.
  • Unit Level of Detail – Because the blend needs to work at the per-unit level of detail as well as across all reporting units, the Unit needs to be in the view (in version 7 at least) for the blend to work. But we want to display a single falls rate no matter how many units are selected.

Sparse Data

To deal with issue of sparse data, there are a few possible solutions:

  • Change the combined line and bar chart into separate charts. This would perhaps be the easiest, though it would require some messing about with filters, hidden reference lines, and continuous date axes to ensure that the two charts had similar axis ranges no matter what. However, that would miss out on the key capability of the combined chart to directly see how a fall contributes to the fall rate. In addition, there would be no reason to write this blog post. :)
  • Perform padding in the data source, either via a query/view or Custom SQL. In an earlier version of this project I’d built this, and maintaining a bunch of queries with Cartesian joins isn’t my favorite cup of tea.
  • Build a scaffold data source with all combinations of the month and unit, and use the scaffold as the primary data source. While possible, this introduces maintenance issues when there’s a need for additional fields at a finer level of detail. For example, the falls measure actually has three separate fall rates – monthly, quarterly, and annual. These are generated as separate rows in our measures data, and the particular duration is indicated by the Period field. So the scaffold source would have to include the Period field to get the data, but then that could be too much detail for the blended fall event data, and make for more complexity in the calculations to make sure the aggregations worked properly.
  • Do a tiny bit of padding in the query, then do the rest in Tableau via Show Missing Values aka domain padding. As I’d noted in an earlier post on blending, domain padding occurs before data is blended so we can pad out the measure data through the current date and then include all the falls. This is the technique I chose, for the reason that padding one row to the data is trivial and turning on Show Missing Values is a couple of mouse clicks. Here’s how I did that:

In my case, the primary data source is a Microsoft Access query that gets the falls measure results from a table that also holds results for hundreds of other metrics that we track. I created a second query with the same number of columns that returns Null for every field except the Measure Date, which has a value of 1/1/1900. Then a third query UNION’s those two queries together, and that’s what is used as the data source in Tableau.

Then, in Tableau, I added a calculated field called Date with the following formula:

//used for padding out display to today
IF [Measure Date] == #1/1/1900# THEN 
    TODAY() 
ELSE 
    [Measure Date] 
END

The measure results data contains a row per measure, reporting unit, and the period. These are pre-calculated because the data is used in a variety of different outputs. Since in this dashboard we are combining the results across units, we can’t just use the rate, we need to go back to the original numerator and denominator. So, I also created a new field for the Calculated Rate:

SUM([Numerator])/SUM([Denominator])

Now it’s possible to start building the line chart view:

  1. Put the Month(Date) – the full month/year version as a discrete – on Columns, Calculated Rate on Rows, Period on the Color Shelf. This only shows the data that exists in the data source, including the empty value for the current month (March in this case):

 

[Screenshot: line chart showing only the existing data, with an empty column for the current month]

 

  2. Turn on Show Missing Values for Month(Date) to start domain padding. Now we can see the additional column(s) for the month(s) – February in this case, between January and the current month – that Tableau has added in:

 

[Screenshot: line chart after Show Missing Values, with the missing month padded in]

 

With a continuous (green pill) date, this particular set-up won’t work in version 8. Tableau’s domain padding is not triggered when the last value of the measure is Null. I’m hoping this is just an issue with the beta; I’ll revise this section with an update once I find out what’s going on.

Even though the measure data only has end-of-month dates, instead of using Exact Date for the month I used Month(Date) because of two combined factors: one is that the default import of most date fields from MS Jet sources turns them into DateTime fields; the second is that Show Missing Values won’t work on an Exact Date for a DateTime field – you have to assign an aggregation to a DateTime (even Second will work). This is because domain padding at this level can create an immense number of new rows and cause Tableau to run out of memory, so Tableau keeps the option off unless you want it. Also note that you can turn on Show Missing Values for an Exact Date for a Date field.

  3. Now for some cleanup steps: for the purposes of this dashboard, filter Period to remove Monthly (we do quarterly reporting), but leave in Null because that’s needed for the domain padding.
  4. Right-click Null on the Color Legend and Hide it. We hide rather than exclude it, because excluding it would cause the extra row for the domain padding to fail.
  5. Set up a relative date filter on the Date field for the last 13 months. This filter works just fine with the domain padding.

Filtering on Unit

Here’s a complicating factor: If we add a filter on Unit, there’s a Null listed here:

 

[Screenshot: Unit quick filter showing a Null entry]

I’d rather just see the list of units. But if we filter that Null out, then we lose the domain padding; the last date is now January 2013:

 

[Screenshot: chart with the Null unit filtered out, now ending at January 2013]

 

One solution here would be to alter the padding to add a padding row for every unit, instead of just one unit. But since Tableau doesn’t let us just hide elements in a filter, and we actually have more reporting units in our production data than we are displaying on the dashboards (yet the all-unit rate needs to include all of the data), I chose to use a parameter filter instead. Setting this up included a parameter with All and each of the units, and a calculated field called “Chosen Unit Filter” with the following formula, which is set to filter on True:

[Choose Unit] == "All" OR [Choose Unit] == [Unit]

Falls With and Without Injury

In a fantasy world, to create the desired stacked bars I’d be able to drag the Number of Records from the secondary data source (i.e. the number of fall events), drag an Injury indicator onto the Color Shelf, and be done. However, that runs into the issue of having a finer level of detail in the secondary than in the primary, which I’ll walk through solutions for in the next section. In this case, since there are only two different numbers, the easy way is to generate two separate measures, then use Measure Names/Measure Values to create the stacked bars – Measure Values on Rows, and Measure Names on the Color Shelf. Here’s the basic calculation for Falls with Injury:

SUM(IF [Injury] != "None" THEN 1 ELSE 0 END)

We’re using a row-level calculated field to generate the measure, and a slightly different calc for Falls w/out Injury.
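The post doesn’t show that second calc verbatim, but presumably it just inverts the condition; a sketch based on the Falls with Injury calc above:

//falls where no injury was recorded (sketch only)
SUM(IF [Injury] == "None" THEN 1 ELSE 0 END)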

Unit Level of Detail

When we want to blend in Tableau at a finer level of detail and aggregate to a higher level, historically there have been three options:

  • Don’t use blending at all; instead, use a query to perform the “blend” outside of Tableau. In the case that there are totally different data sources, this can be more difficult but not impossible: use one of the systems, or a different system, to create a federated data source – for example, by adding your Oracle table as an ODBC connection to your Excel data, then making the query on that. In this case, we don’t have to do that.
  • Use Tableau’s Primary Groups feature to “push” the detail from the secondary into the primary data source. This is a really helpful feature; the one drawback is that it’s not dynamic, so any time there are new groupings in the secondary it would have to be re-run. Personally, I prefer automating as much as possible, so I tend not to use this technique.
  • Set up the view with the needed dimensions in the view – on the Level of Detail Shelf, for example – and then use table calculations to do the aggregation. This is how I’ve typically built this kind of view.

Tableau version 8 adds a fourth option:

  • Tell Tableau what fields to blend on, then bring in your measures from the secondary.

I’ll walk through the table calculation technique, which works the same in version 7 and version 8, and then how to take advantage of v8’s new feature.

Using Table Calculations to Aggregate Blended Data

In order to blend the falls data at the hospital unit level, to make sure that we’re only showing falls for the selected unit(s), the Unit has to be in the view (on the Rows, Columns, or Pages Shelves, or on the Marks Card). Since we don’t actually need to display the Unit, the Level of Detail Shelf is where we’ll put that dimension. However, just adding that to the view leads to a bar for each unit; for example, for April 2012 one unit had one fall with injury and another had two, and two units each had two falls without injury.

 

[Screenshot: stacked bar chart showing a separate bar for each unit]

 

To control things like tooltips (along with performance in some cases), it’s a lot easier to have a single bar for each month/measure. To do that, we turn to a table calculation. Here’s the Falls w/Injury for v7 Blend calculated field, set up in the secondary data source:

IF FIRST()==0 THEN
	TOTAL([Falls w/Injury])
END

This table calculation has a Compute Using of Unit, so it partitions on the Month of Date. The IF FIRST()==0 part ensures that there is only one mark per partition. I’m using the TOTAL() aggregation here because it’s easier to set up and maintain. The alternative is to use WINDOW_SUM(), but in versions prior to Tableau 8 there are some performance issues, so the calc would be:

IF FIRST()==0 THEN
	WINDOW_SUM(SUM([Falls w/Injury]), 0, IIF(FIRST()==0,LAST(),0))
END

The , 0, IIF(FIRST()==0,LAST(),0) part is necessary in version 7 to optimize performance; you can get rid of it in version 8.

You can also do a table calculation in the primary that accesses fields in the secondary; however, TOTAL() can’t be used across blended data sources, so you’d have to use the WINDOW_SUM version.
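As a rough sketch, that primary-side calc might look like the following, mirroring the v7 formula above but prefixing the field with the secondary data source name. “Falls” here is a hypothetical placeholder for whatever the secondary source is actually called, and the exact form is illustrative only:

//sketch of the same calc authored in the primary; "Falls" stands in for the secondary data source name
IF FIRST()==0 THEN
	WINDOW_SUM(SUM([Falls].[Falls w/Injury]), 0, IIF(FIRST()==0,LAST(),0))
END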

With a second table calculation for the Falls w/out Injury, now the view can be built, starting with the line chart from above:

  1. Add Measure Names (from the Primary) to Filters Shelf, filter it for a couple of random measures.
  2. Put Measure Values on the Rows Shelf.
  3. Click on the Measure Values pill on Rows to set the Mark Type to Bar.
  4. Drag Measure Names onto the Color Shelf (for the Measure Values marks).
  5. Drag Unit onto the Level of Detail Shelf (for the Measure Values marks).
  6. Switch to the Secondary to put the two Falls for v7 Blend calcs onto the Measure Values Shelf.
  7. Set their Compute Usings to Unit.
  8. Remove the 2 measures chosen in step 1.
  9. Clean up the view – turn on dual axes, move the secondary axis marks to the back, change the axis tick marks to integers, set axis titles, etc.

This is pretty cool: we’re using domain padding to fill in for non-existent data, and then having a blend happen at one level of detail while aggregating to another, just for the second axis. Here’s the v7 workbook on Tableau Public:

Patient Falls Dashboard – Click on image above to go to Tableau Public

Tableau Version 8 Blending – Faster, Easier, Better

For version 8, Tableau made it possible to blend data without requiring the linking fields in the view. Here’s how I build the above v7 view in v8:

  1. Add Measure Names (from the Primary) to Filters Shelf, filter it for a couple of random measures.
  2. Put Measure Values on the Rows Shelf.
  3. Click on the Measure Values pill on Rows to set the Mark Type to Bar.
  4. Drag Measure Names onto the Color Shelf (for the Measure Values marks).
  5. Switch to the Secondary and click the chain link icon next to Unit to turn on blending on Unit.
  6. Drag the Falls w/Injury and Falls w/out Injury calcs onto the Measure Values Shelf.
  7. Remove the 2 measures chosen in step 1.
  8. Clean up the view – turn on dual axes, move the secondary axis marks to the back, change the axis tick marks to integers, set axis titles, etc.

The results will be the same as in v7.

Next: Tableau’s Data Blending Architecture

———————————————————————————-

References:

[1] Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau, University of Washington and Tableau Software, Seattle, Washington, March 2012, http://homes.cs.washington.edu/~kmorton/modi221-mortonA.pdf.

[2] Hans Rosling, Wealth & Health of Nations, Gapminder.org, http://www.gapminder.org/world/.

[3] Jonathan Drummey, Tableau Data Blending, Sparse Data, Multiple Levels of Granularity, and Improvements in Version 8, Drawing with Numbers, March 11, 2013, http://drawingwithnumbers.artisart.org/tableau-data-blending-sparse-data-multiple-levels-of-granularity-and-improvements-in-version-8/.

 

Tips & Tricks #9: How Do Changes on the Source Report (Dataset) Get Reflected in MicroStrategy Report Services 9.x Documents

In MicroStrategy Report Services Documents, document datasets and their original source reports (such as a grid report being used as a dataset) are not completely connected to each other. Depending on the change made to the source report, it can be reflected differently on the document. Basically, changes can be divided into two types.

Type 1 – Formatting Changes

Formatting changes include, for example, changing autostyles, thresholds, and subtotals.

If a user chooses the option Add to Section without Formatting, the grid/graph showing on the document will not use the report’s stored formatting. Any formatting changes on the source report will not be reflected on the document.

Tip 9-1

If a user chooses the option Add to Section with Formatting, the grid will be added with the current format of the report. However, any formatting changes made to the source report AFTER the dataset has been included in the document will NOT be reflected on the document.

For example, if a user disables the subtotal (see screenshot below) on the original report after the report has been included as a dataset in a document, the document will still show the subtotals.

Tip 9-2

To force the document to recognize the report’s formatting changes, the user needs to delete the grid/graph from the document section and add it again using the With Formatting option. By doing this, the latest formatting properties of the grid/graph on the source report are retrieved.

Type 2 – Adding/Removing Objects and Modifying Report Filters

The other type of change made on the report involves adding/removing objects and modifying report filters.

Unlike the formatting change, this type of change does carry over from the source report to the document datasets.

For example, if users add/remove/modify report filters (see screenshot below), the change will be reflected on the data when running the document.

Tip 9-3

If an object is removed from the source report (see screenshot below), the user can see the change in the document’s Dataset Objects window. After the user runs the document, the object will be removed from the grid/graph on the document.

Tip 9-4

If an object is added to the report (see screenshot below), the change will show in the document’s Dataset Objects window. However, the object will not be automatically added to the grid/graph. The user has to manually add the object to the grid/graph, or add the dataset to the section again, to make it show up.

Tip 9-5

NOTE: As of MicroStrategy 9.0, a new feature was introduced where a user can add a dataset report to a Report Services document as a shortcut by selecting the Add to Section As a shortcut option, as shown below:

Tip 9-6

If the grid/graph is added to the document using this option, then the document will be updated automatically if ANY type of change is made on the source report.

An Introduction to Data Blending – Part 3 (Benefits of Blending Data)

Readers:

In Part 2 of this series on data blending, we delved deeper into understanding what data blending is. We also examined how data blending is used in Hans Rosling’s well-known Gapminder application.

Today, in Part 3 of this series, we will dig even deeper by examining the benefits of blending data.

Again, much of Parts 1, 2 and 3 is based on a research paper written by Kristi Morton from The University of Washington (and others) [1].

You can learn more about Ms. Morton’s research as well as other resources used to create this blog post by referring to the References at the end of the blog post.

Best Regards,

Michael

Benefits of Blending Data

In this section, we will examine the advantages of using the data blending feature for integrating datasets. Additionally, we will review another illustrative example of data blending using Tableau.

Integrating Data Using Tableau

In Ms. Morton’s research, Tableau was equipped with two ways of integrating data. First, in the case where the data sets are collocated (or can be collocated), Tableau formulates a query that joins them to produce a visualization. However, in the case where the data sets are not collocated (or cannot be collocated), Tableau federates queries to each data source and creates a dynamic, blended view that consists of the joined result sets of the queries. For the purpose of exploratory visual analytics, Ms. Morton (et al.) found that data blending is a complementary technology to the standard collocated approach, with the following benefits:

  • Resolves many data granularity problems
  • Resolves collocation problems
  • Adapts to needs of exploratory visual analytics

Figure 1 - Company Tables

Image: Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau. [1]

Resolving Data Granularity Problems

Oftentimes a user wants to combine data that may not be at the same granularity (i.e. the datasets have different primary keys). For example, let’s say that an employee at company A wants to compare the yearly growth of sales to a competitor, company B. The dataset for company B (see Figure 1 above) contains a detailed quarterly growth of sales for B (quarter, year is the primary key), while company A’s dataset only includes the yearly sales (year is the primary key). If the employee simply joins these two datasets on year, then each row from A will be duplicated for each quarter in B for a given year, resulting in an inaccurate overestimate of A’s yearly earnings.

This duplication problem can be avoided if, for example, company B’s sales dataset were first aggregated to the level of year and then joined with company A’s dataset. In this case, data blending detects that the data sets are at different granularities by examining their primary keys and notes that, in order to join them, the common field is year. To join them on year, an aggregation query is issued to company B’s dataset, which returns the sales aggregated up to the yearly level as shown in Figure 1. This result is blended with company A’s dataset to produce the desired visualization of yearly sales for companies A and B.

The blending feature does all of this on-the-fly without user-intervention.

Resolves Collocation Problems

As mentioned in Part 1, moving data into a single managed repository is expensive and often untenable. In other cases, the data repository may have a rigid structure, as with cubes, to ensure performance, support security, or protect data quality. Furthermore, it is often unclear if it is worth the effort of integrating an external data set that has uncertain value. The user may not know until she has started exploring the data if it has enough value to justify spending the time to integrate and load it into her repository.

Thus, one of the paramount benefits of data blending is that it allows the user to quickly start exploring their data; as they explore, the integration happens automatically as a natural part of the analysis cycle.

An interesting final benefit of the blending approach is that it enables users to seamlessly integrate across different types of data (which usually exist in separate repositories) such as relational, cubes, text files, spreadsheets, etc.

Adapts to Needs of Exploratory Visual Analytics

A key benefit of data blending is its flexibility; it gives the user the freedom to view their blended data at different granularities and control how data is integrated on-the-fly. The blended views are dynamically created as the user is visually exploring the datasets. For example, the user can drill-down, roll-up, pivot, or filter any blended view as needed during her exploratory analysis. This feature is useful for data exploration and what-if analysis.

Another Illustrative Example of Data Blending

Figure 2 (below) illustrates the possible outcomes of an election for District 2 Supervisor of San Francisco. With this type of visualization, the user can select different election styles and see how their choice affects the outcome of the election.

What’s interesting from a blending standpoint is that this is an example of a many-to-one relationship between the primary and secondary datasets. This means that the fields being left-joined in from the secondary data sources match multiple rows in the primary dataset, which results in these values being duplicated. Any subsequent aggregation operations would then reflect this duplicate data, resulting in overestimates. The blending feature, however, prevents this scenario from occurring by performing all aggregation prior to duplicating data during the left-join.

Figure 2 - San Francisco Election

 Image: Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau. [1]

Next: Data Blending Design Principles

——————————————————————————————————–

References:

[1] Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau, University of Washington and Tableau Software, Seattle, Washington, March 2012, http://homes.cs.washington.edu/~kmorton/modi221-mortonA.pdf.

[2] Hans Rosling, Wealth & Health of Nations, Gapminder.org, http://www.gapminder.org/world/.

An Introduction to Data Blending – Part 2 (Hans Rosling, Gapminder and Data Blending)

Readers:

In Part 1 of this series on data blending, we began to explore the concepts of data blending as well as the life-cycle of visual analysis.

Today, in Part 2 of this series, we will dig deeper into how data blending works.

Again, much of Parts 1, 2 and 3 is based on a research paper written by Kristi Morton from The University of Washington (and others) [1].

You can learn more about Ms. Morton’s research as well as other resources used to create this blog post by referring to the References at the end of the blog post.

Best Regards,

Michael

Data Blending Overview

Data Blending allows an end-user to dynamically combine and visualize data from multiple heterogeneous sources without any upfront integration effort. [1] A user authors a visualization starting with a single data source – known as the primary – which establishes the context for subsequent blending operations in that visualization. Data blending begins when the user drags in fields from a different data source, known as a secondary data source. Blending happens automatically, and only requires user intervention to resolve conflicts. Thus the user can continue modifying the visualization, including bringing in additional secondary data sources, drilling down to finer-grained details, etc., without disrupting their analytical flow. The novelty of this approach is that the entire architecture supporting the task of integration is created at runtime and adapts to the evolving queries in typical analytical workflows.

A Simple Illustrative Example

In this section we will discuss a scenario in which three unique data sources (see left half of Figure 1 below for sample tables) are blended together to create the visualization shown in Figure 2 below. This is a simple, yet compelling mashup of three unique measures that tells an interesting story about the complexities of global infant mortality rates in the year 2000.

Figure 1

 

Image: Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau. [1]

In this example, the user wants to understand if there is a connection between infant mortality rates, GDP, and population. She has three distinct spreadsheets with the following characteristics: the first data source contains information about the infant mortality rates per 1000 live births for each country, the second contains information about each country’s total population, and the third contains country-level GDP. For this analysis task, the user drags the fields “Country or Area” and “Infant mortality rate per 1000 live births” from her first data source onto the blank visual canvas. Since these fields were the first ones selected by the user, the data source associated with them becomes the primary data source.

This action produces a visualization showing the relative infant mortality rates for each country. But the user wants to understand if there is a correlation between GDP and infant mortality, so she then drags the “GDP per capita in US dollars” field from Data Table A onto the current visual canvas. The step to join the GDP measure from this separate data source happens automatically: the blending system detects the common join key (i.e. “Country or Area”) and combines the GDP data with the infant mortality data for each country. Finally, to complete her analysis task, she adds the “Population” measure from Data Table B to the visual canvas, which produces the visualization in Figure 2 below, associated with the blended data table in Figure 1.

 

Figure 2

Image: Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau. [1]

Hans Rosling, Gapminder and Data Blending

The Gapminder World interactive graph below combines different data sources to show how long people live and how the number of children a woman has relates to how much money she earns.

Gapminder World for Windows

Image: Hans Rosling’s Wealth and Health of Nations (Gapminder.org) [2]

In the screenshot above, the y-axis shows us Children per woman (total fertility). The x-axis shows us Income per person (GDP/capita, PPP$ inflation-adjusted). The series data points (the bubbles) show us the population of each country. If you were to click the Play button, you would see, as an interactive “slide show,” how countries have developed since 1800.

This demonstrates the flexibility of the data blending feature, namely that users can dynamically change their blended views by pivoting on different data sources and measures to blend in their visualizations.

In the screenshot below, Mr. Rosling explains how to use the interactive Gapminder World application.

Also, Mr. Rosling has provided Gapminder World Offline, which you can use to show animated statistics from your own laptop! It can be run on Windows, Mac and Linux. Here is a link to the download installation page on the Gapminder.org site.

And here is a link to the PDF of the Gapminder World Guide shown below.

Gapminder World Guide

Image: Hans Rosling’s Gapminder World Guide (PDF) [2]

Next: Benefits of Blending Data

——————————————————————————————————–

References:

[1] Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau, University of Washington and Tableau Software, Seattle, Washington, March 2012, http://homes.cs.washington.edu/~kmorton/modi221-mortonA.pdf.

[2] Hans Rosling, Wealth & Health of Nations, Gapminder.org, http://www.gapminder.org/world/.

An Introduction to Data Blending – Part 1 (Introduction, Visual Analysis Life-cycle)

Readers:

Today I am beginning a multi-part series on data blending.

  • Parts 1, 2 and 3 will be an introduction and overview of what data blending is.
  • Part 4 will review an illustrative example of how to do data blending in Tableau.
  • Part 5 will review an illustrative example of how to do data blending in MicroStrategy.

I may also include a Part 6, but I have to see how my research on this topic continues to progress over the next week.

Much of Parts 1, 2 and 3 is based on a research paper written by Kristi Morton from The University of Washington (and others) [1].

Please review the source references, at the end of each blog post in this series, to be directed to the source material for additional information.

I hope you find this series helpful for your data visualization needs.

Best Regards,

Michael

Introduction

Tableau and MicroStrategy’s new Analytics Platform are commercial business intelligence (BI) software tools that support interactive, visual analysis of data. [1]

With a Web-based visual interface to data and a focus on usability, these tools enable a wide audience of business partners (IT’s end-users) to gain insight into their datasets. The user experience is a fluid process of interaction in which exploring and visualizing data takes just a few simple drag-and-drop operations (no programming skills or DB experience required). In this context of exploratory, ad-hoc visual analysis, we will explore a feature originally introduced in Tableau in 2006, and in MicroStrategy’s new Analytics Platform v9.4.1 late last year (2013).

We will examine how we can integrate large, heterogeneous data sources using this feature, called data blending, which gives users the ability to create data visualization mashups from structured, heterogeneous data sources dynamically, without any upfront integration effort. Users can author visualizations that automatically integrate data from a variety of sources, including data warehouses, data marts, text files, spreadsheets, and data cubes. Because data blending is workload driven, we are able to bypass many of the pain points and uncertainty of creating mediated schemas and schema-mappings in current pay-as-you-go integration systems.

The Cycle of Visual Analysis

Unlike databases, our human brains have limited capacity for managing and making sense of large collections of data. In database terms, the feat of gaining insight into big data is often accomplished by issuing aggregation and filter queries (producing subsets of data).

However, this approach can be time-consuming. The user is forced to complete the following tasks:

  1. Figure out what queries to write.
  2. Write the queries.
  3. Wait for the results to be returned in textual format. And then, finally,
  4. Read through these textual summaries (often containing thousands of rows) to search for interesting patterns or anomalies.

Tools like Tableau and MicroStrategy help bridge this gap by providing a visual interface to the data. This approach removes the burden of having to write queries. The user can ask their questions through visual drag-and-drop operations (again, no queries or programming experience required). Additionally, answers are displayed visually, where patterns and outliers can quickly be identified.

Visualizations leverage the powerful human visual system to help us effectively digest large amounts of information and disseminate it more quickly.

Cycle of Visual Analysis

Image: Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau. [1]

Figure 1, above, illustrates how visualization is a key component in turning information into knowledge and knowledge into wisdom.

Ms. Morton discusses the process as follows:

The process starts with some task or question about which a knowledge worker (shown at the center) seeks to gain understanding. In the first stage, the user forages for data that may contain relevant information for their analysis task. Next, they search for a visual structure that is appropriate for the data and instantiate that structure. At this point, the user interacts with the resulting visualization (e.g. drill down to details or roll up to summarize) to develop further insight.

Once the necessary insight is obtained, the user can then make an informed decision and take action. This cycle is centered around and driven by the user and requires that the visualization system be flexible enough to support user feedback and allow alternative paths based on the needs of the user’s exploratory tasks. Most visualization tools, however, treat this cycle as a single, directed pipeline, and offer limited interaction with the user. Moreover, users often want to ask their analytical questions over multiple data sources. However, the task of setting up data for integration is orthogonal to the analysis task at hand, requiring a context switch that interrupts the natural flow of the analysis cycle. We extend the visual analysis cycle with a new feature called data blending that allows the user to seamlessly combine and visualize data from multiple different data sources on-the-fly. Our blending system issues live queries to each data source to extract the minimum information necessary to accomplish the visual analysis task.

Often, the visual level of detail is at a coarser level than the data sets. Aggregation queries, therefore, are issued to each data source before the results are copied over and joined in Tableau’s local in-memory view. We refer to this type of join as a post-aggregate join and find it a natural fit for exploratory analysis, as less data is moved from the sources for each analytical task, resulting in a more responsive system.

Finally, Tableau’s data blending feature automatically infers how to integrate the datasets on-the-fly, involving the user only in resolving conflicts. This system also addresses a few other key data integration challenges, including combining datasets with mismatched domains or different levels of detail and dirty or missing data values. One interesting property of blending data in the context of a visualization is that the user can immediately observe any anomalies or problems through the resulting visualization.

These aforementioned design decisions were grounded in the needs of Tableau’s typical BI user base. Thanks to the availability of a wide variety of rich public datasets from sites like data.gov, many of Tableau’s users integrate data from external sources such as the Web, or corporate data such as internally-curated Excel spreadsheets, into their enterprise data warehouses to do predictive, what-if analysis.

However, the task of integrating external data sources into their enterprise systems is complicated. First, such repositories are under strict management by IT departments, and often IT does not have the bandwidth to incorporate and maintain each additional data source. Second, users often have restricted permissions and cannot add external data sources themselves. Such users cannot integrate their external and enterprise sources without having them collocated.

An alternative approach is to move the data sets to a data repository that the user has access to, but moving large data is expensive and often untenable. We therefore architected data blending with the following principles in mind: 1) move as little data as possible, 2) push the computations to the data, and 3) automate the integration challenges as much as possible, involving the user only in resolving conflicts.

Next: Data Blending Overview

——————————————————————————————————–

References:

[1] Kristi Morton, Ross Bunker, Jock Mackinlay, Robert Morton, and Chris Stolte, Dynamic Workload Driven Data Integration in Tableau, University of Washington and Tableau Software, Seattle, Washington, March 2012, http://homes.cs.washington.edu/~kmorton/modi221-mortonA.pdf.

A Must Read: Bryan Brandow’s BI Blog on Evolving BI

Readers:

I have some good news to share with you.

Bryan Brandow, a Data Engineering Manager, has started (or perhaps restarted) blogging again. His new blog is called Bryan’s BI Blog, and the first post is a must read. It is titled Evolving BI, and in it Bryan shares his latest adventures, his thoughts on the current state of BI as he has observed and experienced it, and where he feels we collectively need to go.

Bryan has also migrated all of his old MicroStrategy Blog posts to this blog site and I encourage you to read these as well.

Here is a link to the Evolving BI post.

Best Regards,

Michael

 

Interview Question #3: Converting a Consolidation to a Custom Group

Question

Can you always convert a consolidation to a custom group and vice versa?

Answer

You can always convert a consolidation to a custom group, but you can only convert a custom group to a consolidation when the custom group filters are from the same table or very similar tables.

Dr. Dobbs Journal: The Corruption of Agile (The Agile Holocracy) – Part 3 of 3

Readers:

In my last post, I included a recent interview Andrew Binstock, Editor in Chief of Dr. Dobbs Journal, had with Dave Thomas about the corruption of Agile and how many people are misunderstanding what the Agile Manifesto really said.

Today, I am continuing with the third and final part of my series titled The Corruption of Agile by highlighting thoughts from one of the key thinkers in the area of agile, Allen Holub. Allen is a consultant, trainer, and speaker, specializing in Lean/Agile process infusion, Agile/OO architecture, and cloud-based web-application development. He’s worn every hat from grunt programmer to CTO, and has written hundreds of magazine articles for various technical publications and a dozen books, including Holub on Patterns: Learning Design Patterns by Looking at Code. He is a regular speaker at various conferences worldwide.

In a recent blog entry in Dr. Dobbs Journal titled The Agile Holocracy, Allen noted that agile is a culture, not a set of practices. It is upper management’s job to establish that culture, and then let it work. A holocratic organization is one of the better ways to do that.

Mr. Holub notes that the word holocracy popped up in a recent Dilbert cartoon, used incorrectly as a synonym for both a flat organizational structure and, as the pointy haired boss says, “wander[ing] around.” Nonetheless, full-on agile organizations have many of the characteristics of a holocracy, so Allen thought it was worth exploring the concept.

The term holocracy was coined by Arthur Koestler in his 1967 book, The Ghost in the Machine, which is actually a discussion of the human brain. Koestler’s thinking applies to social organizations as well, however. A holocracy is made up of autonomous, self-reliant units called holons (from the Greek word for “whole”). The holons are simultaneously independent from and dependent on the organization that they comprise. That is, a holocracy is simultaneously a whole and its parts.

Koestler uses our own brains as an example. The cerebellum (the “reasoning” brain) is largely independent of the amygdala (the so-called “lizard brain”). These two sub-brains operate independently under their own rules, but interact in a way that allows the whole system to behave both coherently and in a way that you couldn’t predict from looking at a single subsystem. This is a kind of emergence; the coherent behavior that arises from the complex interactions of the rules and strategies of the independent holons is more sophisticated than the behavior of the parts. The ghost is the thing that you can’t predict just by looking at the parts.

Koestler also points out that when one subsystem dominates, the whole system can become pathological. When your lizard brain hears a loud noise and says “panic!,” your cerebellum counters with, “no, that’s just an electric guitar and a gazillion-watt amp” and overrides it. The synthesis is a feeling of excitement that the concert’s starting. If the lizard brain wins, everybody in the stadium gets trampled.

The notion of holocracy works well in agile organizations. In fact, I’d say that some level of holocratic structure is essential. Independent units — teams or individuals — are responsible for specific organizational roles, and interact with one another in a synergistic (to use an overused word) way. A person is a holon, and so is a team. Each holon is both independent and part of the whole. Each holon is responsible for itself, but is also responsible for the success of the whole organization. The holons carry out well-defined roles, and are responsible for doing whatever they have to do to succeed in those roles. Nobody orders anybody around. The teams and the people are responsible for defining and executing their own tasks.

Getting back to Dilbert, holocratic organizations do have a flat hierarchy. They are peer-to-peer networks, not trees, and there’s no place for any sort of “middle management.” The executives are the organization’s lizard brain: They set the direction and support the broad culture. The teams are the cerebellum: Thinking out the details and getting the processes to work. The synthesis of those roles yields a functional agile organization. If either holon is dominant (that is, if agile practice exists without a supporting culture, or if the executives perpetuate command-and-control, or if the teams get to do everything they want without regard to budget or scope), everybody gets trampled.

There isn’t even a “CEO” in the captain-of-the-ship sense. Instead, the holons (both people and teams) communicate with each other in order to tune what they’re doing. In the same way that people talk to each other, teams interact horizontally. For example, teams can “inject” members into other teams. Spotify, for example, has a “guild” system. All the database folks from all of the teams get together occasionally for Database-Guild meetings, and so forth. Think of this arrangement as a database-guild holon that injects members into the development-team holons.

Holocratic organizations aren’t fantasies. Probably the most well-known is W.L. Gore, Inc., as described in Malcolm Gladwell’s Tipping Point. There are no formal titles and no hierarchical structure. People have “sponsors,” but nobody reports to anybody.

The main objections to the viability of a holocracy seem to come from people who confuse holocracy with anarchy — everybody doing whatever they want. That’s not what I’m describing. A holocracy is highly disciplined. You take your responsibilities seriously, and work hard both to be effective and to make everybody else successful. Communication and collaboration are central to both functions, so everybody does that.

Similarly, holocracy is not Communism. The people who contribute more are rewarded accordingly. However, compensation has to be transparent and rational. Secrets undermine the collaborative nature of the system.

Dysfunction happens when one holon becomes dominant. An out-of-control “executive” holon, for example, can destroy the agility of the organization. I’ve seen teams, departments, and whole companies brought to their knees by pathological “leaders” intent on dominating, who think that somebody has to “run” things. The mandates that come from these people do nothing but add dysfunction. I’m not talking about micromanagement, which is also a sort of disease. Dysfunction happens any time somebody can’t do something because they need permission, or won’t do something because they’re afraid of some repercussion. The holons (teams and individuals) must be empowered to fulfill their roles.

Of course, there’s no reason for some of the holons not to be tasked with the health of others. For example, one responsibility of the “executive” holon is to assure that the company has an effective internal culture. If a team becomes dysfunctional, the “executive” holon’s job is to help the team heal, not by ordering people around, but by providing training, mentorship, etc. If that doesn’t work, though, the “executive” holon could act like an antibody. The immune system is part of a healthy brain.

The core value in a holocracy is trust. Hierarchical command/control organizations are built on mistrust. They assume that nobody beneath you in the hierarchy is truly self-sufficient and competent, so they impose paternalistic governance practices that force these assumed-incompetent players to behave. A holocracy trusts everybody to do their work in the best way possible.

Since trust and assumed competence are also firmly ingrained in the Agile Principles, there’s a good match here. Everybody’s got to be a grownup, though. I once heard Alistair Cockburn answer the criticism “your methods won’t work in the real world because they require nothing but A players” by saying that making good software always requires A players, regardless of your process. I agree.

As a final note, Holacracy.org adapted Koestler’s notions into a social/organizational milieu, and serves as an interesting example of another functional variant of a holocracy. Everybody at Holacracy.org is literally (in the legal sense) a partner. There are no bosses, and there is no CEO. I’m not a huge fan of Holacracy.org’s formalization, but they’re prominent enough to deserve mention. For one thing, I’m offended by the way that Holacracy.org has trademarked Koestler’s term. The trademark isn’t enforceable, given prior use, but they have no right to set themselves up as the arbiter of somebody else’s idea. It’s like somebody trademarking “agile.” Moreover, I don’t particularly like their fixed rules (documented in a Constitution), which seem to me to fly in the face of the core principle of self-organization.

I’ve said this before, but agile is a culture, not a set of practices. It is upper management’s job to establish that culture, and then let it work. A holocratic organization is one of the better ways to do that.

———————————————————————————–

Source: Allen Holub, The Agile Holocracy, Dr. Dobb’s Journal, March 14, 2014, http://www.drdobbs.com/architecture-and-design/the-agile-holocracy/240166629.

 

Dr. Dobb’s Journal: The Corruption of Agile (or What the Agile Manifesto Really Said) – Part 2 of 3

Readers:

In my last post, I included a recent article from Andrew Binstock, who is the Editor in Chief of Dr. Dobb’s Journal. Mr. Binstock expressed his concerns about how we are corrupting Agile.

Dave Thomas

Today, I am continuing with Part 2 of my series, The Corruption of Agile, by highlighting thoughts from one of the founding authors of the Agile Manifesto, Dave Thomas. Dave (photo, right) is a coauthor of the legendary book The Pragmatic Programmer, a publisher, a speaker, and the man who brought Ruby to the masses.

In a recent interview with Mr. Binstock, Dave discussed what he feels is wrong with Agile. But, before we get started, let’s review what the Agile Manifesto really said. This is verbatim from the Agile Manifesto website.

 

Manifesto for Agile Software Development

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more. [2]

The Corruption of Agile – Andrew Binstock’s Interview with Dave Thomas [1]

AB: Let me move to the Agile Manifesto. You were one of the original signatories…
DT: Yes, yes. There were 17 of us that met in Snowbird in Utah in February 2001; and we came up with the four values and the common ground and practices. The way we came up with it was not something I’d ever experienced before. There were 17 people and everyone had their own ax to grind and their own particular viewpoint on what was best. And within a couple of hours, we’d come up with a way of working that let us discuss the principles behind what we were doing without going into the details. So the framework itself, the actual four values, came out in half a day. It just kind of came. I think everyone kind of knew what they wanted to say. I think it was actually Martin Fowler and I — everyone else was having lunch — we were kind of sitting at a whiteboard just chatting through things and we came up with the “we value A over B” formulation, which, as it turns out, has worked out really well.
AB: I think that perhaps a limitation is that a lot of people have interpreted it as “no B whatsoever.”
DT: Indeed! And, in fact, we recognized that might happen at the time, and we put in the little phrase at the bottom: “whereas we value B, we value A more.” So, we thought that would happen and we tried to stop it, but, of course, we…
AB: …can never control…

DT: Yes, that was the depressing thing for me. I came away from Snowbird feeling really happy. And within a month, I realized it wasn’t going to work. And I have not been involved in anything even called “Agile” until last month.
AB: Last month?
DT: That’s right, last month. I have not gone to any Agile conferences. I have not joined the Agile Alliance.
AB: Why is that?
DT: Because it got immediately productized in many different ways. The whole point, to my mind, of the Agile Manifesto is that it’s a set of personal practices that may scale to team level. You do not need a consultant to show you how to do that. It may help to have someone facilitate, but you do not need a consultant. And yet immediately what happened was that everyone and their dog hung out an Agile shingle and the whole thing turned into a branding exercise.
AB: Yes.
DT: And so now in the software world, if you’re selling something and it doesn’t have the word “Agile” somewhere in the title, it’s not good. And I hate that because we had something that was…it was my best attempt at the time to present a set of free-standing, self-contained values that could be applied by all. And now it’s just become mush. It’s become devalued to the point where it doesn’t mean anything.
DT: I also have a pure English-language abhorrence to the people who use Agile as a noun. As in “I do Agile.” No you don’t. It’s like saying “I do orange.”
AB: When you were meeting at Snowbird, was there a feeling there that you were doing something important that was going to change software?
DT: Good question.
AB: The fact that you chose the term “manifesto” suggests that you were planting a flag.
DT: There was a lot of discussion at the time about what it should be called. And, to be honest, I can’t remember to what extent we were thinking of posterity when we came up with the word “manifesto.” Even the word Agile was debated a lot.

AB: That’s what I’ve heard.

DT: I think we discussed “lightweight.” There was some sense that this was going to be bigger than the 17 of us because we agreed on the last day that Ward [Cunningham] was to go create a site that would allow people to sign up. Ward is someone who is always building community. And I think we all went along thinking that would be a really good way to spread the word. But the reality was that I don’t think any of us were particularly marketing people. The fact that it took off as quickly as it did was that we just happened to say the right thing at the right time. We didn’t actively push it particularly, it just kind of pushed itself or actually pulled itself.

Next: Allen Holub and The Agile Holocracy

Bonus Section: The 12 Principles behind the Agile Manifesto

For those of you who have never read the actual Agile Manifesto, I have provided a copy of its guiding principles here for your review.

We follow these principles:

Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.

Welcome changing requirements, even late in
development. Agile processes harness change for
the customer’s competitive advantage.

Deliver working software frequently, from a
couple of weeks to a couple of months, with a
preference to the shorter timescale.

Business people and developers must work
together daily throughout the project.

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.

The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation.

Working software is the primary measure of progress.

Agile processes promote sustainable development.
The sponsors, developers, and users should be able
to maintain a constant pace indefinitely.

Continuous attention to technical excellence
and good design enhances agility.

Simplicity–the art of maximizing the amount
of work not done–is essential.

The best architectures, requirements, and designs
emerge from self-organizing teams.

At regular intervals, the team reflects on how
to become more effective, then tunes and adjusts
its behavior accordingly. [2]

———————————————————————————–

Sources:

[1] Andrew Binstock, Dave Thomas Interview: The Corruption of Agile; Ruby and Elixir; Katas and More, Dr. Dobb’s Journal, March 18, 2014, http://www.drdobbs.com/architecture-and-design/dave-thomas-interview-the-corruption-of/240166688?pgno=3.

[2] Kent Beck et al., Manifesto for Agile Software Development, The Lodge at Snowbird Ski Resort, Utah, February 11-13, 2001, http://agilemanifesto.org/.

 

 
