Sunday, 26 May 2013

21/05/13: Ontario, Canada boasts a city called London situated on a river called the Thames

The attendees
1) The statistician
2) The doctor
3) The programmer
4) The Turk
5) The anthropologist

The ones that got away
1) Who had an autobiography titled Moonwalk?
2) In what year was In Bruges released?
3) What river runs through Peterborough?
4) Identify the film from this quote: "Guns don't kill people, postal workers do."
5) Identify the London station from this cryptic clue: "There's no-one home in this mountainous entranceway"

The answers


New! How did you do? Let the world know with the form on the right.

Update: Poll closed, so it's time for results! 6 votes (5 more than I was expecting) with 1 perfect score, 2 scoring 2/5, 2 scoring 1/5, and 1 joining us on zero.

The average reader scored 1 out of 5.

The excuses
1) A pretty brutal one to start with this week. Amusingly, and as is so often the case, we spent a long time debating which of two options to go for, both of which ultimately proved to be incorrect. Having dismissed Michael Jackson as 'too obvious' (a mistake we have made before in this quiz), our attention focused on whether it was more likely to be Neil Armstrong or Buzz Aldrin, ultimately opting for the former. I have since discovered that Armstrong never wrote an autobiography (his biography is called First Man), and that there is rather more of a history to his famous missing 'a' than I realized.

2) The crapshoot that is 'guess the year' hurts us again. (Although I should take some flak for not properly hearing out the Turk who had the correct answer just as we were about to hand over answer sheets for scoring.)

3) Pro-tip: never let your quizmaster know your weaknesses. In our case the penalty has been a near-weekly dose of British geography questions designed to trip us up. We can't really complain, everyone gets the same questions and all that, I just wish he'd chosen an area which would be worth us brushing up on: I don't expect many pub quiz questions about Peterborough when we move to Canada in two months. Oh wait.

4) We went with the admittedly rather optimistic The Postman and then had a fun discussion of the origins of the phrase 'going postal'.

5) In the last of a relatively small set of ones that got away - which still only saw us finish joint third - a fairly typical cryptic Tube station. At first I was a touch non-plussed by 'mountainous' clueing to 'hill', but this really is quite gettable as soon as you spot the 'not in' component, so I can't really complain. (By comparison, a couple of weeks ago we solved "Dark ninjas at the chicken shop": even the quizmaster couldn't tell us where the shinobi came into it.)

Sunday, 19 May 2013

14/05/13: the shoe brand Hush Puppies take their name from a dish of fried cornballs

That's right: pictures! See questions 9 and 10!
The attendees
1) The statistician
2) The doctor
3) The programmer

The ones that got away
1) In the story 101 Dalmatians, how many of the dalmatians were puppies?
2) In what year was Schindler's List released?
3) Ditloid: 1 H O A C
4) Identify the film from this quote: "I respect women! I respect them so much that I completely stay away from them!"
5) Identify the film from this quote: "I don't want to be a product of my environment. I want my environment to be a product of me."
6) Which tropic - Cancer or Capricorn - passes through Brazil?
7) The RAF squadron number 617 is better known as what?
8) Identify the London station from this cryptic clue: "The heaviest Albion of the midlands"
9) Identify company logo A in the picture. (Clue: "something you eat".)
10) Identify company logo B in the picture. (Clue: "something you eat".)

The answers


The excuses
1) Spectacularly frustrating. I anticipated that the quizmaster meant the Disney film, rather than the book, and so I asked him for clarification. "I'm asking about the story by Dodie Smith, but I don't think it matters, does it?". Uh-oh. We put down 97 knowing it was the correct answer, but as we suspected the answer given was 99 (the number of puppies in the film). Cue a lengthy 'discussion' with the quizmaster about it, who eventually said "I guess I'd better go and check this myself, hadn't I?". It seems he forgot to, however, and so what would have been a two-point swing for us against most teams went begging. We didn't force the issue - it's a charity quiz with little at stake beyond pride - but still a pretty frustrating outcome. Oh, and in case you're wondering, the adults in the book are Pongo, Missis, Perdita and Prince (who I distinctly remembered appearing as the 101st Dalmatian on the very last page).

2) Zagged the wrong way with 1991 (for some reason I was pretty sure it was an odd year in the early 90s thanks to my very occasional attempts to learn the Best Picture Oscar winners).

3) I'm starting to wonder if these 'ditloids starting with 1' are just becoming a weekly attempt by the quizmaster to troll us. We came up with at least half a dozen plausible suggestions, as did most other teams by the sound of it, but our eventual pick of '1 hook on a coathanger' fell on unsympathetic ears.

4)-5) Fortunately none of us had seen either film, so our traditional film quote fail was fractionally less frustrating than normal. We had, however, seen Infernal Affairs, the Hong Kong thriller upon which The Departed is based. (It's pretty good, too.)

6) An embarrassing miss, and one that was entirely my fault for being inexplicably sure of the (wrong) answer. The question obviously just amounts to "which one is below the equator?" and at least now is something I won't forget in a hurry. Nothing like an excruciating quizzing error to imprint something on the memory. (See also: my thinking Sachin Tendulkar once took 10 wickets when I was on University Challenge.) If you need an aide memoire for this, a friend has furnished me with this beauty: "Cancer is near Canada. Like, can-can, geddit?".

7) A good overthink on this one: we all thought Dambusters immediately (which, given the highly publicized anniversary of the mission, we really should have known for sure), but then someone suggested it was the Red Arrows, which seemed to make much more sense. Unfortunately they're merely known as the Royal Air Force Aerobatic Team.

8) Having done this quiz near-weekly for around 9 months now we're finally getting the hang of our quizmaster's particular 'style' of cryptic clue. Here we had all the components: Albion of the Midlands can only mean the area most famous for its football team, and while it doesn't really make sense we could see how 'heaviest' could clue to the common suffix '-ton'. As is often the case, however, we'd never heard of the answer, so putting the parts together proved just beyond us.

9)-10) In an exciting new feature of The Ones That Got Away, I'll now include picture clues we missed. We went with Fox's Glacier Mints for the first (who at least have a polar bear on their logo) and, with the rather tenuous Prince of Wales link, Duchy Originals for the second (not even close).

Thursday, 16 May 2013

Bonus Question
Predicting Eurovision finalists: would you beat a monkey?

If this isn't the sort of monkey who would like Eurovision, I don't know what is
For many, Eurovision remains a one-night stand with all the sequins Europe has to offer. The hardcore, meanwhile, know different. Saturday's final will always be the main course of this annual European feast, but the preceding week, with its two semi-finals, serves as a delicious starter (complete with the odd really weird canapé that nobody likes).

What's more, with no British interest to worry about, UK fans can sit back and enjoy the delightfully difficult game of guessing who will make it to the main event. After the first semi-final on Tuesday social media (well, the media I socialize with, at any rate) was awash with people proudly declaring how many of the 10 finalists they correctly predicted, with 7 out of 10 widely considered a 'good' performance. But is it really?

At face value, 70% does sound pretty respectable: that's a first class degree at most universities, after all. Reframing the problem, however, can change our perspective. Predicting who will go through is equivalent to predicting who will crash out, and with only 16 countries competing on Tuesday, spotting 7 out of 10 qualifiers is the same as getting just 3 of 6 losers correct. All of a sudden things aren't quite so impressive.

Score Monkey %
10 0.01
9+ 0.8
8+ 9
7+ 39
6+ 78
5+ 97
4+ 100
I thought I'd put my lifetime of statistical training to good use and see how well someone picking at random (a monkey being the traditional example) would do at predicting Eurovision finalists. I've written up the mathematical details over on my stats blog, in case that's your thing, but most of you are probably just interested in the final numbers. The table on the right, then, summarizes how well my hypothetical Eurovision-loving monkeys would have fared at predicting Tuesday's success stories. The Monkey % indicates how many monkeys would score at least that well (so you may notice that 100% of monkeys would manage at least 4 out of 10 - the lowest possible score).

Overall, the monkeys would get on surprisingly well. Around 40% of them would, for example, manage that 7 out of 10 many people seemed quite proud of. Even my own performance of 9 out of 10, which I was really rather happy with (despite my copy book being blotted by the Netherlands, of all things), only puts me in the top 1% of monkeys. Not so impressive after all.
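For the curious, the figures in the tables can be computed exactly: a monkey's score follows a hypergeometric distribution (choose 10 countries at random from the field, then count the overlap with the 10 actual qualifiers). This little Python sketch - my own illustration, not the exact code behind the stats-blog write-up, and with a made-up function name - reproduces both tables:

```python
from math import comb

def monkey_table(n_countries, n_qualify=10):
    """P(score >= k), as a percentage, for a monkey who picks
    n_qualify finalists at random from n_countries and is scored
    against the n_qualify actual qualifiers (hypergeometric tail)."""
    total = comb(n_countries, n_qualify)
    # The picks and the qualifiers must overlap in at least this many countries
    lowest = max(0, 2 * n_qualify - n_countries)
    pmf = {k: comb(n_qualify, k) * comb(n_countries - n_qualify, n_qualify - k) / total
           for k in range(lowest, n_qualify + 1)}
    return {k: 100 * sum(p for j, p in pmf.items() if j >= k) for k in pmf}

print(monkey_table(16))  # Tuesday's semi-final: 16 countries
print(monkey_table(17))  # Thursday's semi-final: 17 countries
```

The minimum-overlap line is why the first table bottoms out at 4: with 16 countries, any 10 picks and the 10 qualifiers must share at least 10 + 10 - 16 = 4 entries, so 100% of monkeys score at least 4.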

Score Monkey %
10 0.005
9+ 0.4
8+ 5
7+ 27
6+ 65
5+ 92
4+ 99
3+ 100
The semi-final fun isn't over, of course, so what would be a good score for Thursday's second heat? With 17 countries to consider, things are slightly different, and this table summarizes our updated monkey performances.

Unsurprisingly, with an extra country in contention, prediction gets a *lot* harder. As such, I think 7 out of 10, placing you in the top quarter of monkeys, could be considered a reasonable performance. 8 out of 10 would get a statistician interested, while 9 or 10 would suggest you should spend less time reading this and more time at the bookies (or maybe just outdoors). Or you could just sit back, relax, and succumb to Eurosong Fever.

Sunday, 12 May 2013

07/05/13: A study of groundhog weather predictions found they do not fare significantly better than chance

The attendees
1) The doctor
2) The programmer

The ones that got away
1) In what year was Groundhog Day released?
2) Which common colour is most similar to the colour Damask?
3) How many paintings did Vincent van Gogh sell in his own lifetime?
4) In which 1993 film did Harrison Ford play Dr. Richard Kimble?
5) Identify the film from this quote: "Now you've got about ten seconds before those guys see you, and when they do they will kill you, you understand? You are about to have a very bad day."
6) The lyrics "And it's just like the ocean under the moon, Well that's the same as the emotion that I get from you" are from which song?
7) Which great train robber escaped from prison in 1965 and fled to Brazil?
8) In what year did the Channel Tunnel open?
9) Anagram: ADVANCE IDOL IRON (clue: famous historical figure)
10) Which city in Lebanon was divided into two by the "Green Line"?

The answers


The excuses
0) You may notice that I ('the statistician') was not present for this one (I was in Seattle, and we've yet to persuade a quizmaster to let one of us 'Skype in') so these excuses are from a slightly different perspective to normal.

1) As always, the film year question proves tricky, although the team's guess of 1988 was more off than usual. Meanwhile, in trying to find an interesting fact about Bill Murray, I stumbled across this quote about his experiences of filming Garfield: The Movie, and it's a pretty good read.

2) It's not entirely clear what was intended with the question here. The implication - it came as half of a two-pointer which also asked about verdigris - was that the answer is a shade of a common colour. We presume the question referred to the greyish-pink colour of the Damask rose, but our answer of pink on the night was marked incorrect. A lesson, I think, in how questions with colours as answers are often dodgy: a pub full of people are unlikely to all agree on what shade something is.

3) One of those questions where you know, because it's being asked in a pub quiz, that the answer is zero or one. Unfortunately they went the wrong way.

4) The team put What Lies Beneath, which was a surprisingly good punt: it stars Harrison Ford playing a doctor, just not one called Richard Kimble.

5) Quite glad I wasn't around for that one as it's my favourite film in the Die Hard series (because how can you not love an action film with lateral thinking puzzles?) and I almost certainly wouldn't have got it either.

6) Apparently the team got as far as identifying the song was by Santana, but couldn't get the title (admittedly hindered primarily by not being able to think of any Santana songs).

7) Finally a question I would have been able to help them with, but included here for the sake of journalistic integrity (or something). To quote directly from the team's post-match report: "We'd lost interest by this point and for some reason put Reggie Kray." At least he was also a criminal.

8) When I saw this question, my first thought was the correct answer, but then I second-guessed myself because I knew that was the year the National Lottery started. So now we have an easy way to remember both.

9) The team insist that they repeatedly misheard the clue as ADVANCE IDOL ISLAND which unsurprisingly hindered their anagramming slightly. Apparently they still suggested the correct answer at some point, but fairly reasonably weren't convinced by it, and never got around to sticking it down.

10) We finish with another question I'd've been able to help with as, unlike these two, I am able to name one (and only one) city in Lebanon. The only other thing I know about Lebanon is that it's easy to remember what their flag looks like because of its famous cedars.

Thursday, 2 May 2013

Bonus Question
Are Manchester really the best at University Challenge?

Monday saw the conclusion of the 2012-13 series of University Challenge where, much to my annoyance, Manchester avenged their quarter-final defeat to University College London to take home the trophy. It's the first time the title has been retained since Magdalen College, Oxford secured back-to-back victories in 1997 and 1998 (an institution Manchester now also join on four victories in the prestigious quiz).

Teams from Cottonopolis have become a familiar sight on Monday night BBC2 TV schedules of late: in the last eight years they've lifted the trophy four times and only failed to make the semi-finals once (when their team didn't make the cut to appear on the programme). There seems little questioning their status as the most consistent institution over the years, but do the data back this up? While we're at it, can we identify the Best Team Ever? It's time for a statistical adventure.

First up: what data? University Challenge first aired in 1963, running for 25 years before being taken off air in 1987. Picked up again eight years later, it has been a fixture of television schedules ever since, and with full round-by-round scores available from 1995 onwards it's this 'Paxman era' where we'll be focusing our attention.

Now we have some data, the next question is how to compare teams both within and across series. At a basic level things are straightforward: it seems fair to assume that the series champions were the best team in that series, while the runners-up were - by definition - second best. But how do you compare the two losing semi-finalists, or compare this year's winners to the champions from 1995?

There are, of course, numerous ways we could derive metrics to compare teams (indeed, there are plenty of established methods in existence) but I wanted to build my own, as-simple-as-possible, model based on three 'intuitive' principles:

1) Progressing further in the competition is better
2) Losing by a small margin is better than losing by a large margin
3) Losing to a team that goes on to do well is better than losing to a team that goes on to do badly

The first of these is made straightforward by the (relatively) consistent tournament structure on the show. Since 1995 every series has featured five rounds, so I decided to assign every team a Baseline score from 1 to 6 based on their stage of elimination from the show: 1 point for losers in the first round (or highest scoring losers who lose their playoff match), 2 if you went out in the second round, and so on up to 5 points for the losing finalists and 6 for the series champions. This measure is the first element of comparing teams within a series: a higher Baseline score means a better performance. The problem is how to separate teams who were eliminated at the same stage of competition, which is where we try and incorporate the second and third principles.

How far a team progressed in a series is one half of how good they are: the other, of course, is how they fared against - and the quality of - the opponents they met along the way. For opponent strength we have a ready-made statistic in their Baseline score, and we can use the scoreline from each of their games to see how well they did. A typical approach in tournaments is to look at the margin of victory or defeat - the 'spread' - but I decided instead to look at the proportion of the total points scored in a game that were picked up by either team. This means that the effect of varying question difficulty across rounds (or even series) is moderated, and also gives us a handy metric of 'performance' in a game in the form of a percentage: if a team lose 150-50 then they picked up 25% of the points in that game, while if they were pipped 155-150 it would be almost 50%.

By multiplying the percentage of points scored in a game by the opponent's Baseline score, we get a measure of performance which I've imaginatively called Performance score. For example, suppose you lost in the first round 150-50 to a team who went on to win the series. Your opponent's Baseline score would be 6, while your points percentage for that game is 25%. Combining these gives you a Performance score of 25% x 6 = 1.5. Your opponents, meanwhile, bagged 75% of the points available, but as first round losers your Baseline score is just 1. They therefore pick up a Performance score of 0.75. It might seem a bit odd that you get more points for losing than they do for winning, but remember that this measure is only used to compare teams who were eliminated at the same stage of the competition, so this comparison doesn't really mean anything.
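The scoring scheme is simple enough to sketch in a few lines of Python (the names here are my own, not from the original analysis):

```python
# Baseline score by stage of elimination (five rounds per series since 1995)
BASELINE = {
    "first round": 1,
    "second round": 2,
    "quarter-final": 3,
    "semi-final": 4,
    "runner-up": 5,
    "champion": 6,
}

def performance_score(points_for, points_against, opponent_baseline):
    """Share of a game's total points scored, weighted by the
    opponent's Baseline score."""
    share = points_for / (points_for + points_against)
    return share * opponent_baseline

def average_performance(games):
    """Mean Performance score across a team's games, used to separate
    teams eliminated at the same stage. `games` is a list of
    (points_for, points_against, opponent_baseline) tuples."""
    return sum(performance_score(*g) for g in games) / len(games)

# The worked example: a 50-150 first-round loss to the eventual champions
performance_score(50, 150, BASELINE["champion"])     # 1.5
performance_score(150, 50, BASELINE["first round"])  # 0.75
```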

From here, we can calculate every team's average Performance score across all of their games, giving a measure of the strength of their opponents and how well they fared against them. We can then use this metric as a tie-breaker to separate teams who have the same number of wins. For example, if we apply this strategy for the current series, we find that of the two losing semi-finalists (New College, Oxford, and Bangor) Bangor would snatch third place. (Admittedly, I was a little surprised by this as New College seemed the much stronger team, but a quick look at the results for the series suggests that this isn't reflected in the scores. For example, Bangor defeated King's College, Cambridge, far more convincingly than New College did.)

In the same way we can also compare the 19 Paxman-era champions to see which team were the most dominant in their series. It will come as little surprise to regular viewers that the 2009 Corpus Christi, Oxford team (aka Corpus Christi Trimble) would have topped this particular list, but as they were disqualified for fielding an ineligible player we instead find the 1998 Magdalen, Oxford squad come out on top. This team were a little before my time, but a poke through that series suggests that the scoring algorithm is doing a reasonable job: their quarter- and semi-finals were Trimble-like demolitions before a relatively narrow victory against Birkbeck to lift the trophy. (Coincidentally, Magdalen also take second in the overall standings, with their 2011 team posting similarly strong statistics.)

What of our original question, though? Which institution has been the most successful at University Challenge in the last 19 years? For this I assigned every team a rank within their series (first based on how far they got in the competition then using average Performance score above to break ties). From here there are then two ways to identify the 'best' institution: their average rank or their total rank. If we go with the former then, predictably, it's a team with only one appearance who top the list: London Metropolitan may have only made it onto the show once, but their third place in the 2004 series gives them a hard-to-beat average. Really, though, as getting through the show's audition exam is itself an achievement, it's total rank that represents a truly consistent institution, and on this metric it's Manchester who take the crown. The top of this list is, however, dominated by teams with multiple appearances: with a whopping 15 appearances Durham are second despite never winning a series, while Magdalen, Oxford are down in seventh with 'only' 9 appearances.

So there you have it, unequivocal proof that Manchester are doing something right, although if you're not totally convinced by my methods I wouldn't blame you. Just in the course of writing this up I spotted at least half a dozen holes one could pick in my metric, and I fully anticipate being alerted to some better, more established approach. Still, it's hard to deny its simplicity, and in any case the most important thing is who comes out on top. I don't think there are many systems that would suggest anything other than what mine has here: Manchester will once again be the team to beat next year, and Corpus Christi wuz robbed.