Wednesday, February 29, 2012

Day 2 @ Strata 2012

The day went by very fast, though there were not a lot of interesting topics. The keynote talks by Ben Goldacre and Avinash Kaushik were all right. The Netflix one was interesting: the speaker talked about the (quite a lot of) other things Netflix does beyond predicting ratings. The 'science of data visualization' session was informative too.

One interesting observation I made today: during the break hour, the men's room had lines while the ladies' room did not. That's totally different from other places I have been, for example, the shopping mall. :P

Tuesday, February 28, 2012

Go Strata! Go DATA!

Today I finally walked into the Strata Conference for Data (and thank God that I live in California now). I was quite excited about this, because there is a ton going on at this conference. And people won't think you are a nerd when you express your passion for ... DATA. Well, in my mind, the entire universe is a big dynamic information system. And what's floating inside the system? Of course, the data! Knowing more about data essentially helps people understand the system, and the universe, better. It's so important that it will become a bigger and bigger part of your life. And maybe someday people will consider data as vital as water and air :)

Anyway, today was the training day of Strata. I chose the 'Deep Data' track. The speakers were all fantastic! It was a great opportunity to see what others actually do with data and how they do it, instead of the tutorial sessions where people just talk about the data. The talks I enjoyed most were Claudia Perlich's 'From knowing what to understanding why' (she really held nothing back on practical data mining tips, and I like the fact that she baked a lot of statistics knowledge into her problem solving, which in my mind is missing in some data scientists. I also really liked her assertive attitude when she said 'I will even look at the data, if somebody else pulled it'.), Ben Gimpert's 'The importance of importance: introduction to feature selection' (well, I always like these kinds of high-level summary talks), and Matt Biddulph's 'Social network analysis isn't just for people' (the example that impressed me most: developers often listen to music while they write code, so there is a connection between music and the programming language. Something that seems totally unrelated got brought into the wok and cooked together. Besides, he had some cool visualizations using Gephi).

At the end of the day, there was an hour-long debate between leading data scientists in the field (most of them came or come from LinkedIn). The topic was 'Does domain expertise matter more than machine learning expertise?', meaning when you are trying to assemble a team and make a hire, do you hire the machine learning person or the domain expert? I personally voted against the statement: I think machine learning expertise matters more when making the first hire. Think about it this way: when you have such an opening, you, the company, should at least have some idea of what you are trying to solve (unless you are starting a machine learning consulting company, in which case the first hires had better be machine learning people). So by then you already have some business domain experts inside your company. Bringing in data miners will then help you solve the problems a domain expert couldn't solve alone. For example, your in-house domain expert might complain that the data is not very accessible, or that there are too many predictors and they don't know how, or which ones, to look at. A machine learning person can hopefully provide advice on data storage, data processing, and modeling to help you sort the data into some workable format, and systematically tell you that you are spending too much time on features that do not make any difference while other features deserve more of your attention. To me, it's always an interactive feedback loop between your data person and your domain experts. And what matters most is the way of thinking about business problems systematically, in an approachable and organized fashion, not necessarily how many models or techniques a machine learning candidate knows.

Overall, Strata is a well-organized conference that I want to attend every year!

Monday, February 13, 2012

Funnel plot, bar plot and R

I just finished my Omniture SiteCatalyst training in McLean, VA a few days ago. It was OK (somewhat boring): we only went through how to click buttons inside SiteCatalyst to generate reports, not necessarily how to implement it and make it track the information we want to track.

I got two impressions out of the class: one, Omniture is a great and powerful web analytics tool; two, funnel plots can be misleading from a data visualization perspective. For example, even though the second event 'Reg Form Viewed' has a higher frequency than the first event 'Web Landing Viewed', the funnel bar for the second event is still drawn narrower than the one for the first event, just because it's designed to be the second stage in the funnel report.

This is a typical example of visualization components not matching up with the numbers. There could be other types of misleading funnel plots as well, as pointed out by Jon Peltier in his blog article. I totally agree with him on using a simple bar plot as an alternative to the funnel plot. And I also like his idea of adding another plot for visualizing some small yet important metric, like purchases in his example.

Then I turned to R to see if I could do some quick poking around on how to turn the misleading funnel I have here into something meaningful and hopefully beautiful. Since I always feel like I don't have a good grasp on how to do bar plots in R, this was going to be a good exercise for me.

As always, figuring out the three-letter parameters of the base package plot functions is painful. And I had to set appropriate margin sizes so that my category names wouldn't be cut off.
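The margin setup I ended up with looked roughly like this (the exact numbers below are my own illustration, not from the original session; tune them to your label lengths):

```r
# widen the left margin (order: bottom, left, top, right) so long category
# names drawn horizontally (las = 1) are not clipped;
# the R default is par(mar = c(5, 4, 4, 2) + 0.1)
par(mar = c(5, 12, 4, 2) + 0.1)
```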

Then I drew the same plot using ggplot2. All the command names make sense, and the plot is built up layer by layer. However, I did not manage to get the x-axis to the top of the plot, which would involve creating a new customized geom.

There are some nice R bar chart tips on the web, for example on learning_r, stackoverflow, and the ggplot2 site. Anyway, this is what I used:

##### barchart

dd = data.frame(cbind(234, 334, 82, 208, 68))
colnames(dd) = c('web_landing_viewed', 'reg_form_viewed', 'registration_complete', 'download_viewed', 'download_clicked')
dd_pct = round(unlist(c(1, dd[,2:5]/dd[,1:4]))*100, digits=0)

# plain horizontal barchart
# control the outer margin so the text can be squeezed into the plot
# las: direction of tick labels for the x/y axes, range 0-3, so 4 combinations
mp <- barplot(as.matrix(rev(dd)), horiz=TRUE, col='gray70', las=1, xaxt='n')
tot <- paste(rev(dd_pct), '%')
# add percentage numbers
text(rev(dd)+17, mp, format(tot), xpd=TRUE, col='blue', cex=.65)
# axis on top (side=3); 'at' gives tick locations; las: parallel or perpendicular to the axis
axis(side=3, at=seq(from=0, to=30, by=5)*10, las=0)
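The `dd_pct` line above packs the step-over-step conversion into one vectorized expression; as a sanity check, here is a more explicit version of the same calculation (a sketch with the same numbers, using a plain named vector instead of a data frame):

```r
counts <- c(web_landing_viewed = 234, reg_form_viewed = 334,
            registration_complete = 82, download_viewed = 208,
            download_clicked = 68)

# each stage's count as a percentage of the previous stage's count;
# the first stage is 100% by definition (it is divided by itself)
step_pct <- round(100 * counts / c(counts[1], head(counts, -1)))
# the over-100% steps are exactly what a funnel plot cannot show honestly
```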

# with ggplot2
dd2 = data.frame(metric=c('web_landing_viewed', 'reg_form_viewed', 'registration_complete',
                          'download_viewed', 'download_clicked'),
                 value=c(234, 334, 82, 208, 68))

ggplot(dd2, aes(metric, value)) +
  geom_bar(stat='identity', fill=I('grey50')) +
  coord_flip() + ylab('') + xlab('') +
  geom_errorbar(aes(ymin = value+10, ymax = value+10), size = 1) +
  geom_text(aes(y = value+20, label = paste(dd_pct, '%', sep=' ')),
            vjust = 0.5, size = 3.5)
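One gotcha worth noting with the ggplot2 version: `metric` ends up as a factor with alphabetical levels, so the bars are drawn in alphabetical order rather than funnel order. A small fix (my own addition, not part of the original snippet) is to set the factor levels explicitly before plotting:

```r
stages <- c('web_landing_viewed', 'reg_form_viewed', 'registration_complete',
            'download_viewed', 'download_clicked')

# rev() so that, after coord_flip(), the first funnel stage ends up at the top
dd2 <- data.frame(metric = factor(stages, levels = rev(stages)),
                  value  = c(234, 334, 82, 208, 68))
```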