Chicago Murder Rate

I’ve seen a lot of articles talking about Chicago’s high murder rate recently, and I’m going to complain about them. More constructively, I’ll post some graphs.

Most recently, the articles were triggered by the very high rate of murders in January 2013. Before that, they were triggered by the round number of 500 murders in 2012. Annoyed at those articles, I went to Google, which found me an article on the high rate of murder from 10/2011 to 3/2012. That third topic is a serious one, but I don't know why there was a crime wave then. Maybe the mild winter? But the other two were non-events. Monthly homicide rates vary by huge amounts and don't mean anything. There were very few murders in December 2012, followed by very many in January 2013. So what? Despite the low (but meaningless) rate of murders in December 2012, reporters were writing about the high rate, because 2012 reached the round number of 500. That was 15% higher than 2011, but the difference was entirely due to the very high rate early in the year, so by December the story was already over. The first half of the year was a big deal, but reporters were not writing about that; they were writing about the false claim that 2012 had been uniformly violent.

Also, many of these stories compared Chicago to New York. New York is the exception, not Chicago. Yes, Chicago could learn something from New York, but so could all American cities. Chicago is doing worse than the national trend in the past 5 years. But I expect that half of cities are doing worse and half better.

Before I get to the graphs, what did I learn?

  • monthly murder rates are noisy
  • murders are seasonal, peaking in the summer
  • loess won’t automatically detect seasonal trends on multi-year time series; more generally, I need to understand it better.

I got Chicago homicide data from a journalism project to map homicides, based on a similar LA project. They get their data from the weekly police blotter. It differs from the final police figures by 2% in most years, in both directions, but the 2010 final number was 4% higher. I clicked on the Google spreadsheet links to download the data, then ran this R code:

library(plyr)
library(ggplot2)
library(Hmisc)       # monthDays()
library(lubridate)   # mdy(), year(), month()

# downloaded Google spreadsheet, one row per homicide
all <- read.csv("all.csv")
all <- mutate(all, date=as.Date(mdy(Date)), year=year(date), month=month(date))
all <- subset(all, month<2 | year<2013)   # keep Jan 2007 through Jan 2013

# murders per month, and a per-day rate to correct for month length
summary <- rename(ddply(all, .(month,year), nrow), c("V1"="murders"))
summary <- mutate(summary, month2=month+12*(year-2007),
   rate=murders/monthDays(as.Date(paste(year,month,1,sep="-"))))

# seasonal pattern, one curve per year
qplot(month, rate, color=factor(year), data=summary) + geom_smooth(se=F)
# long-term trend across all six years
qplot(month2, rate, color=year, data=summary) + geom_smooth()

The first graph shows the seasonal pattern of murders and compares different years.

The second graph shows the long term trend.

What did I learn from this?
From the first graph, there is a seasonal trend.
Second, from either graph, there is a lot of noise.
Third, the second half of 2012 is typical, while the first half is very bad, though I already deduced that from the numbers in the papers.

The loess on the whole time series did not notice the seasonal trends. It is probably right to ignore such a high-frequency effect (period 12 on the monthly discretization). So I tried graphing the number of murders per day. That is very noisy, so I'm only going to show curves, not scatterplots. I didn't know how to get ddply to do this, so I used a for loop (an alternative without the loop is sketched after the code). Also, R's : operator doesn't work well with dates.

library(plyr)
library(ggplot2)
library(lubridate)

all <- read.csv("all.csv")
all <- mutate(all, date=as.Date(mdy(Date)), year=year(date))

# count murders on each day, including days with none;
# a for loop because I didn't see how to make ddply emit the zero days
start <- as.Date("2007-01-01")
end   <- as.Date("2013-01-31")
summary <- data.frame()
for(i in 0:(end-start)) {
  d <- start+i
  df <- data.frame(date=d, murders=nrow(subset(all, date==d)))
  summary <- rbind(summary, df)
}
summary <- mutate(summary, year=year(date), day=yday(date))

# six-year trend from the daily counts
qplot(date, murders, data=summary, geom="smooth", method=loess)
# one curve per year, by day of year
qplot(yday(date), murders, data=subset(summary, year<2013), geom="smooth", color=factor(year), se=F)
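
As an aside, the per-day counts can also be built without the for loop. This is just a sketch using base R, with seq() standing in for the : operator (which drops the Date class):

days <- seq(start, end, by="day")   # unlike start:end, this keeps Dates as Dates
counts <- table(factor(as.character(all$date), levels=as.character(days)))
summary2 <- data.frame(date=days, murders=as.vector(counts))   # same contents as summary above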

The first graph, of the six-year trend, doesn't look any different from the one based on monthly discretization, so I'm not posting it. The second is only slightly different from the one based on the monthly discretization. The most obvious difference I see is that in some years, the discretization broadens the peak across two months.

So one lesson is that monthly discretization is not so bad, for looking at the data a year at a time.

The second lesson is that I don’t understand loess. I was hoping that with the daily data, the global loess would pick up the seasonal cycle, but it doesn’t. It gives more weight to distant observations than I had thought. I should learn how it works and what options there are for tweaking it. Probably I can force it to be more local. By doing loess one year at a time, I’m throwing out information at the year breaks, yielding wide standard errors at the ends (visible in other graphs). I tried moving the year breaks, but the results weren’t that interesting.
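
For what it's worth, the locality of loess is controlled by its span parameter, the fraction of the data used in each local fit, and the loess behind geom_smooth defaults to span=0.75, which here averages over years of data. A sketch of forcing a more local fit; the value 0.05 (roughly three or four months of the six-year daily series) is only a guess for illustration:

qplot(date, murders, data=summary, geom="smooth", method=loess, span=0.05)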


The Wave Function is not a function

This is a subtle error in quantum mechanics that occasionally has consequences, but usually does not. I started to write this 11/2011, but it just trailed off.

According to von Neumann, a quantum system is a Hilbert space of states, a unitary time flow on the Hilbert space, and a (von Neumann) algebra of observables acting on the Hilbert space. Typically, people build quantum systems through canonical quantization, starting with a classical system, that is, a symplectic manifold and a Hamiltonian function on it. (The physics term "canonical" is roughly equivalent to the math term "symplectic.") Using the Moyal star or another explicit method, one forms a one-parameter deformation of the commutative algebra of functions on the symplectic manifold. The parameter is called h-bar, and eventually one fixes its value to be the observed physical value. I'm not sure how one forms the Hilbert space in general. In the specific case in which the symplectic manifold is the cotangent bundle of another manifold, called space, the Hilbert space is the functions on that smaller manifold and the algebra is the (pseudo-)differential operators, which may be thought of as functions on the symplectic manifold via their symbols.
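
For concreteness, to lowest order the star product corrects the pointwise product by the Poisson bracket, so commutativity fails only at order h-bar:

$$ f \star g = fg + \frac{i\hbar}{2}\{f,g\} + O(\hbar^2), \qquad f \star g - g \star f = i\hbar\,\{f,g\} + O(\hbar^3). $$

Setting h-bar to zero recovers the commutative algebra of functions on the symplectic manifold.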

Thinking of our observables as functions on the phase space or of the states as functions on space gives them more structure than von Neumann allows and can lead to error.

In particular, in the many-worlds interpretation, people often say that the squared amplitude of the wave function at a particular point is the degree of reality of that world. But the wave function is not a function, and thus one cannot ask about its amplitude at a particular point. At least, one cannot do so without imposing additional structure, and the answer depends on that extra structure. (E.g., one could express the symplectic manifold as the cotangent bundle of a different manifold, such as the graph of a 1-form.) When h-bar = 0, the algebra of observables knows about the points of the symplectic manifold and there is no ambiguity. Maybe the ambiguity when h-bar is not zero can be controlled by h-bar. We can't evaluate the wave function at a point, but maybe we can evaluate it at an h-bar-fuzzy point. The uncertainty principle is relevant.
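
One standard way to make an "h-bar-fuzzy point" precise is the Husimi function, which pairs the state with a coherent state, a Gaussian packet of width of order the square root of h-bar centered at a phase-space point (q,p):

$$ Q_\psi(q,p) = \frac{1}{2\pi\hbar}\,\bigl|\langle q,p \mid \psi \rangle\bigr|^2 . $$

This assigns a non-negative density to phase-space points, but only by smearing over a cell of area of order h-bar, which is exactly the uncertainty-principle fuzziness just mentioned.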

Amazon’s price check app

Almost all the coverage, and certainly all the anger, about Amazon's price check app is about features the app has provided for years, namely taking a product in the store and finding out Amazon's price. This encourages people to free-ride on the retail experience. But people are only complaining now because Amazon is promoting it, in particular by paying people to use it.

But there is another aspect of the app which is new, pointed out by Malcom [sic] Digest and Tony in the comments at Marginal Revolution. Tyler Cowen uses the word "report" in his link, copied from the subtitle of the Guardian article, but the article does not discuss its significance. The change is that the promotion requires the user to tell Amazon the price in the store. As MD puts it:

Isn’t the whole point of the Amazon Price Check app to allow Amazon.com to adjust prices accordingly? When Amazon is setting the prices for you, it’s not like they need to compete with every store in the country, just the ones in your neighborhood.

Rahul:

The real suckers were the price-checking consumers; Amazon’s eating into the consumers surplus not into the High Street book seller sales.

Of course, they are also getting people to use the old price check functionality and cutting into bricks-and-mortar sales, too.

Is Ticketmaster a natural monopoly?

Ticketmaster’s high fees appear to be monopoly rents, but what are the barriers to entry? Is there a network effect? The classic example of a natural monopoly is an auction site, like eBay. The buyers have to go there, because the sellers are there and vice versa. Ticketmaster has exclusive contracts with a lot of venues, so the audience and the band have to go to Ticketmaster. But what’s in it for the venues? If they went somewhere else, the bands and audience would follow. It appears to me that the situation has changed over the years. In the 80s and 90s, it was a natural monopoly. More recently, it was quite vulnerable to competition. It may have become a natural monopoly again.

In the 80s and 90s, Ticketmaster provided its phone service and physical box offices and machines. This was a big barrier to entry with gains from scale, so it was a natural monopoly. At least it was a natural monopoly in each city, and there were different companies in different places. Ticketmaster bought them all up. That probably yielded some gains from scaling the phone system, but otherwise had little immediate effect. I don't see how it could reduce competition, since they were already local monopolies, though having fewer companies probably reduced innovation. I think this is the point at which people began to be outraged by their prices, so maybe the uniform geographic monopoly was useful to them. Or maybe Ticketmaster was simply the only company to identify and exploit the value of the monopoly.

More recently, ticket buying has shifted to the web, which presents a pretty low barrier to entry: any venue can set up its own web site and take credit cards. So why do they stick with Ticketmaster and its 40% fees? In 2006, the LA Times seemed to say that Ticketmaster was quite vulnerable to competition. The key quote is:

People close to Ticketmaster say that other concert companies have made similar comments about the ticketing company, only to sign new Ticketmaster deals once they got the terms and upfront payments they demanded.

If Ticketmaster is to be believed, the competition cut into its bottom line. But it didn't lower prices; instead, it kicked the money back to the venues in the form of up-front payments. The venues have a lot to gain from competition, but I wonder how many of them noticed that. By keeping prices fixed and structuring the discount as kickbacks, Ticketmaster makes it harder for other venues to notice what is going on. They see that, say, the 76ers switched away and came back and conclude that they should stick with Ticketmaster, when in fact switching or threatening to switch could get them a lot of money. Another reason to structure it this way is to hide it from the bands: Ticketmaster functions as a bogeyman to let the venues raise prices without giving the money to the band. Maybe there's some game-theoretic commitment, too. The article also says that half of the fees attributed to Ticketmaster were being passed on to the promoter, Live Nation. So even if direct competition wasn't causing them problems, someone else was able to extract the rents.

More recently, Live Nation merged with Ticketmaster. That could recreate a natural monopoly. Live Nation promotes a lot of big acts, like Madonna. They probably prefer Ticketmaster venues. Thus it is harder for venues to leave Ticketmaster. Or maybe they were already able to demand monopoly rents on their own and Ticketmaster is just a side-show.

(Why do I believe the Ticketmaster quote in the LA Times article? If, as I claim, they want to keep these deals quiet, they shouldn't mention them to the reporter. But I can't see what good it would do to make up this particular statement if it is false. It seems to me that if they were going to make something up, they would say that venues always come back to them and give a vague, positive explanation, perhaps about how they provide a valuable service. So I believe them because it makes them look weak. People usually lie to make themselves look strong.)

I found the LA Times article from a Steve Sailer post. He is not so interested in the economics of Ticketmaster as the psychology of public perception of monopoly and power in general. Also, a post on the economics of scalping.

short sales

Robin Hanson describes a tax on short sales as a ban on bad news. Many commenters complain that short sellers manipulate stock prices. I'm sure that there is a lot of that, but there is probably more manipulation of stock prices upwards. Why do people have asymmetric views?

Prosecution for insider trading is asymmetric, too. It seems to me that there is a bias toward treating short activity as more suspicious and subjecting it to more scrutiny. Are the regulators subject to this bias, or just pandering to the crowd?

Reported (legal) trading by insiders in the US reflects information when it is a purchase, but not when it is a sale, probably because informative sales would attract more attention. Yet in places where insider trading is legal (e.g., Hong Kong), reported trades reflect information on sales but not on purchases. That's bizarre. Why don't the purchases reflect information? Because the executives are overoptimistic? But then why not in the US? Because US executives have such trouble selling that buying must be a big commitment, deserving more thought?