Eclipse Of Venus – 8 (Think, Thought)

Now I will boldly go where every man (and woman) has gone before, at least almost all. This is the area of statistics.

There is no contradiction on this subject (of every person being ruled by statistics), and if there is, I would use my veto; undoubtedly I am the boss here, and I won't have to ask for anyone's (including…'s) permission to declare it (as long as I am incognito).

The world is driven by statistics, and our personal universe is a small part of the world. Whenever we talk about GDP, we are in statistics; the trade balances, it is again that; the poverty level, unemployment, industrial performance, inflation, price rise, the economy, elections, crimes, weather and rainfall forecasts… every part of our daily life is driven and controlled by statistics, and it affects us much more than we would like to believe.

Based on crude prices (average = statistics) and the expected trend, and the processing costs (statistics), the domestic prices are fixed, and that affects even the cost of vegetables in the market. Based on economic parameters, the government decides on taxes, duties and bank interest, and again that affects everything we have or need. Let us not underestimate the role that statistics play in our life, though we would probably end up doing so, since it would be difficult to overestimate it.

Add to it the other standard statement, which has long been there, to the heartburn of the statisticians and due to the heartburn of their victims, "Statistics is bull-shit", and we can now appreciate why life is so shitty.

I will now take some examples. Some of these are oft-repeated, usually with a derisive snigger, and some are not yet thought of, but could still end up on the same field (derision). I will again try to have a look at why everyone says this about statistics. Before that, however, I will split the people based on the way they look at the same subject.

Since all animals are not equal, I will keep the "More Equal" animals in one place and the "Less Equal" in another, and try to decipher how people with different viewpoints look at the same object, and how statistics extrapolates it and tells us what they should have been doing.

Obviously, if they didn't, it was their error, not that of statistics or statisticians. The errors that creep in are due to the inherent stubbornness of humans and mules. Neither of them wants to follow the set rules, even when they are wrong and know it. Mules do it only in idioms, but humans? Probably as idiots?

There are many ways to gather the data to analyse in statistics. The type of data, of course, is based on the particular subject of interest and what we want to know, and the same defines the source of the data too. One critical aspect to keep in mind is that we should select the proper question to ask (the answer would be the metric), and ask as few people as we can. Fewer people would make our life in interpretation simpler, but it would also make us erroneous if the people are not properly selected. These people, like jurors, should represent at least the majority of diverse mindsets and opinions on the subject. This is the principle of unbiased and representative sample selection, and it is rarely achieved when the universe, and hence its sample members, are human beings. Instead of blaming statistics, we should probably keep GIGO in mind. Statistics churns out garbage when the inputs (not only the respondents but the statisticians too) are garbage.
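The GIGO point can be sketched numerically. Below is a minimal, hypothetical simulation (all numbers invented): a population where 70% hold one opinion, sampled once at random and once from a single homogeneous pocket, showing how a badly selected sample produces garbage however correctly the average is then computed.

```python
import random

random.seed(42)

# Hypothetical population: 70% hold opinion "A", 30% hold opinion "B".
population = ["A"] * 7000 + ["B"] * 3000

def support_for_A(sample):
    """Fraction of a sample holding opinion 'A'."""
    return sum(1 for s in sample if s == "A") / len(sample)

# An unbiased sample: every member has an equal chance of selection.
random_sample = random.sample(population, 200)

# A biased sample: drawn only from one homogeneous pocket of 3000 people
# (imagine polling a single neighbourhood) -- garbage in.
biased_sample = random.sample(population[:3000], 200)

print(round(support_for_A(random_sample), 2))   # close to the true 0.70
print(round(support_for_A(biased_sample), 2))   # 1.0 -- garbage out
```

The arithmetic of the average is identical in both cases; only the selection differs, and that alone decides whether the output means anything.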

I would, for the time being, neglect the existence or the need of the Municipal Truck and then look at how various methods would behave when people have been asked to select from a limited number of options. It could be selecting one or a few candidates for employment, it could be the best invention to be awarded the Nobel Prize, or the Oscar, or the best candidate for the Presidency of the USA (for India I would put it as the best political party).

A limited number of options need not be a very small number, nor the vacancies to be filled only a 'few'. There could be 200 candidates to fill a job opening of 10. However, the larger these two numbers (candidates and slots) are, the more difficult it would be to make 'the best' selection. Still, it might be possible to select approximately the best, or to select a large number of people from the group. Assume that the selector is told that he should not only select, but also rank the candidates from 1 to 200. If no written or other scientific (e.g. psychometric) tests are conducted for this purpose, it isn't going to be easy or non-controversial.

The selection could be done by various people, or groups of them, using various methods,

  • By subject experts - a common method of choosing from options. The eminent experts are asked to give their opinion, which might include suggestions for changes in the method itself. Eminent physicists are asked to select the Nobel Physics Prize winner; a group of renowned physicists is asked to select the best discovery of the century, etc. Prominent actors are asked to select the one who should be given the 'Oscar' for best actor.
  • By a small sample of the affected persons, where we ask the people for their choice. It could be an open-ended choice, where people are asked to go by memory, or a hinted one, where the selection would be from among options. In either case there would be many variations. If I ask about the invention/discovery of the century, for some it might be the TV, some would put their money on the computer, and some, more knowledgeable, on transistors or photo-cells, etc.
  • Through the whole population, where everyone would be asked to give their opinion. Of course, we know that all can't be coerced into voting, but still the size could be very large, so as to reflect the extremes. Probably the non-voters would be those without any specific choice, who don't know what to vote for. Of course, there would be another group: those who won't vote since they don't find any purpose served, or consider it a futile activity (we find that in elections quite often).

What would be the outcome?

  • In the first case, it is what the experts think the people should think.
  • In the second case, a group of people think what the others must be thinking, and
  • in the last one, the whole population takes its stand on its own thought process.

Since the outcome is an average (even the rank is averaged), the variations are smoothed out and we should get the normal person's opinion. The definition of this normal person, of course, would differ. For the experts, a normal person would be something else, whereas for the other two, it would be more or less the same. To each, their own (average) point of view.

Even assuming these would be different by the above logic, how different would they be? Obviously, experts would differ from common people. But if I take the common people as not too common, i.e. those with a sufficient degree of literacy, would they defy Amartya Sen and go ahead and vote for Modi? Or was it due to those villagers and non-intellectuals that Modi swept the elections, while the educated and literate voted as Amartya wanted them (and all others too) to vote?

Elections being by anonymous voting, we don't know the background of the individuals who voted for a particular candidate/party, so I can't make my judgement based on this. Of course, there could still be some sort of analysis and hypothesis made. It would be the urban/rural patterns, as well as the trends from polling booth to polling booth. Usually the polling booths would show some type of pattern; e.g. some would be from affluent localities and some others from slums. Instead of going to the Election Commission for the data (I am not sure they maintain it, or are even supposed to divulge it; that might result in a particular locality being put in the firing line), I will take some other example.

With the Oscars just past, I will have a look at how the Oscars behaved on these criteria.

The first group, the experts, is of course the Oscar selection committee, which is formed of the related subject experts (e.g. best actor by actors), and probably some top-rated critics too. This is common practice for any award, whether it is the Oscar, or the Nobel, literature or artistic sports (gymnastics, diving, synchronised swimming, etc.). To simplify it further, I will go for a single and simple parameter, which would be easy to respond to, not only for the experts but for the common audience too.

Which in your view is the best picture of the year?

And the nominations, selected by the experts, were:

  • Arrival,
  • Fences,
  • Hacksaw Ridge,
  • Hell or High Water,
  • Hidden Figures,
  • La La Land,
  • Lion,
  • Manchester by the Sea and
  • Moonlight.

After the deliberations, the jurors (the crème de la crème) said that the best of the best (grammatically wrong, but in vogue; there can't be anything better than the best, leave aside the superlative of the superlative) was Moonlight. I don't think they divulge the ranking except for the winner, though I assume, based on the number of major nominations, it would be La La Land in second position and then Manchester by the Sea in third. After that there is confusion, so let me guess by number of nominations: Arrival, Hacksaw Ridge, and after that the remaining four bundled together. This is pure guesswork and need not be right, but it might not matter for my work here either. I am going to take only three categories,

  • Winner(s),
  • Loser(s) and
  • Extreme Losers (not even nominated).

How did the various groups look at and place individuals in each of these three slots?

I collected the scores of

  • The Oscars and a website named Metacritic – both provide me with the ranks in the opinion of the expert group.
  • IMDb, TMDb and Rotten Tomatoes – these would give me insight into the ranking by the public (plural and large).
  • A few individual opinions (one blog and a few lists).

When I looked at the rankings, frankly, I wasn't too surprised. Unless I am too naive and prepared to be shocked at each and everything, it won't even make me flutter my eyelashes. However, I am still listing them here, so that they don't flutter others' either, when they come across things like this in future.

 

Not being surprised doesn't mean I wanted to see a pattern like this. The top three by the individual people and the groups were replicated, but it is exactly the opposite of what the critics wanted us to say; the subject specialists (Oscar) took the middle ground. However, as a saving grace, the top three for each set were the same top three, maybe in a different order. After the top three there was a bit of difference of opinion, but it wasn't too much. Not too bad, I would say.

But wait for a moment and reflect on it. The group data is clubbed together by joining a few separate groups. Each of them would have its own take on the subject.

However, since each of the groups consisted of a sufficiently large number of individuals, the group thinks should be parallel, i.e. the Group 1 average should be more or less the same as Group 2, Group 3… I didn't expect the voters on IMDb to be of a particular mindset quite different from the people who prefer to vote on the websites of TMDb or Rotten Tomatoes. None of these websites has a restrictive criterion (as for the Oscars or Metacritic) which would deny anyone who wishes to vote the right to do so. In addition, I can assume there would be quite a large common voter base across the three websites.

If it is so, then, as I said, the group thinks would be convergent due to the large number of respondents in each, and the preferences in each would run parallel. Did it happen?

To make it more visible, I have coloured the picture rather than leaving it in black and white. The top third (#1, 2 and 3) in the opinion of a particular group is green, the next three (#4, 5 and 6) blue, and the last three pink. Naturally, if the groups thought in the same way, the rows would be of the same colour.
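The colouring rule itself takes only a couple of lines. This is an illustrative sketch of the banding just described; the movie names and ranks in it are placeholders, not the actual table data.

```python
# Band a rank (1-9) into the three colours used in the mosaic.
def colour(rank):
    """Map a rank to its third: green (top), blue (middle), pink (bottom)."""
    if rank <= 3:
        return "green"   # top third: #1, 2, 3
    if rank <= 6:
        return "blue"    # middle third: #4, 5, 6
    return "pink"        # bottom third: #7, 8, 9

# Placeholder ranks for one hypothetical group, not the real data.
group_ranks = {"La La Land": 1, "Moonlight": 2, "Arrival": 5, "Fences": 8}
for movie, rank in group_ranks.items():
    print(movie, colour(rank))
```

If the groups agreed, every movie would print the same colour whichever group's ranks were fed in; the mosaic shows they don't.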

This mosaic wasn’t what I wanted to see. It is pardonable when I talk of individuals, but not when I am discussing similar groups.

Except for La La Land, which most have ranked highly (except Rotten Tomatoes, who threw a bucket of pulp onto it), nowhere else do the groups have any sort of unity of opinion. Moonlight is 2nd in two groups and penultimate in one.

I can re-adjust the statistics and get another ranking. Sometimes the difference in ranks could be due to a fraction of a point, and sometimes there would be a large difference. To overcome that, I can try some adjustment by taking raw scores and then ranking (the 10-point gradings I could multiply by 10). This is not statistics; I should in fact weight the individual averages by the respective number of respondents. But that is too much effort for nothing very significant, unless it is my thesis work. So I will do a non-statistical approximation. This worsened the situation, if anything.
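As a sketch of what that adjustment involves, here is the arithmetic with invented figures (the site names, scores and respondent counts are all hypothetical): rescale each site's grading to a common 100-point scale, then compare the quick plain average with the respondent-weighted one that proper statistics would demand.

```python
# Hypothetical per-site figures for one movie:
# (mean score, scale maximum, number of respondents).
site_scores = {
    "SiteA": (8.1, 10, 120_000),   # 10-point scale
    "SiteB": (79.0, 100, 300),     # 100-point scale
    "SiteC": (4.0, 5, 45_000),     # 5-point scale
}

def to_percent(mean, scale_max):
    """Rescale any grading to a common 100-point scale."""
    return mean * 100.0 / scale_max

# The quick, non-statistical approximation: plain average of rescaled means.
plain = sum(to_percent(m, s) for m, s, _ in site_scores.values()) / len(site_scores)

# The more defensible version: weight each mean by its respondent count,
# so a site with 300 voters no longer counts as much as one with 120,000.
total_n = sum(n for _, _, n in site_scores.values())
weighted = sum(to_percent(m, s) * n for m, s, n in site_scores.values()) / total_n

print(round(plain, 1), round(weighted, 1))   # 80.0 vs roughly 80.7
```

Even on invented numbers the two answers differ, which is why re-adjusting the arithmetic can shuffle the ranking.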

"Hidden Figures", literally hidden till now, suddenly came out of its shell like a tortoise and won, if not the gold, at least a bronze. I am not going into the variation between the individuals, since that is expected to differ, and it does. But the groups? Especially when they have been taken as unbiased as possible?

No one can now morally blame the psephologists, who regularly hit the nail every time dot on the shank, probably never intending to hit the head, at least as far as I can make out from the results. For example, in the recent Uttar Pradesh elections,

The nearest was the second row, but even that was off by more than 50% in quite a few columns. Another, from Uttarakhand, is a bit better,

Is it a wonder that the politicians rush to astrologers and seers for blessings and more dependable predictions than what they can get from these people? It is even more significant when I realise that these predictions are based on exit polls, not pre-poll opinion surveys. Probably the Election Commission is right in its stand: one can't be allowed to spread blatant misinformation to sway people when the elections are held in instalments.

To be fair, these may not be deliberate lies. These are predictions based on group interviews ("Who did you vote for?"). The problem with this type of data, as we statisticians are well aware, is sample bias, and unless you eliminate it, the data is of no value. However, over the years, the sample-based predictions have rarely come anywhere near the final population result, and that seems to indicate that these statisticians are either not learning, or the people are smarter than them at confusing them. When the pollsters plug a leak, the interviewees stealthily open a few more holes in their assumptions.

Election voting, for quite some time, has been drifting away from the old ways. There was a time when we all knew that my mother would openly vote for the Cow and Calf (Mrs Gandhi), and she would stick to her stand whether it was before, during or after the Emergency. My father wouldn't look at anything else on the ballot paper other than, I think it was, the Lamp (Jana Sangh). To make matters complex, I had (and still have, though with age they too have become cold-blooded like me) family members who were ex-Naxals (the original ones, not the so-called ones of today), and obviously of known loyalty. Despite so much clash of opinions, there was no civil war, not even unrest, and absolutely no attempt to convert. It was all peaceful and harmonious co-existence, and that too within a family. It is rare to see that nowadays even in a locality, where people are marked and targeted for their political inclinations, not even affiliations. How could it matter? People don't pause to think that it doesn't, something I have seen from very close quarters.

I brought the issue up since this has created an extra problem for psephologists. Now, except for some die-hard castes and cadres, you can't assume any loyalty. Taking a proportionally representative sample based on this aspect, and assuming it would be an unbiased sample, would be wrong. In addition, in these exit polls, people might not speak the truth with others listening; that could result in major personal repercussions. The groups from an area, probably of the same religion/caste, go together to vote, but in the current situation it is not necessary that they vote as a block, as they used to do earlier. A specific section still does it, but the majority is unpredictable, and now it seems that they are judicious and thinking. This has clearly come out in this result, not because people voted for a particular party, but because they voted for a subject, probably irrespective of the whip of their caste/local leader. The subject may be right or wrong, and I won't dive into that controversy, but there is no doubt that it has captured the imagination of the people and made them, even sometimes in stealth, support it.

If I select a sample from a group and presume that it represents the whole group, and that each person is speaking the truth, it could be a potentially wrong assumption. With so many distractions, the psephologists have their hands full finding a more complex model for the opinion polls, which they are probably as yet unable to do. However, this is almost off the topic here, and I shouldn't get too distracted by it.

The online web metrics, say for the movies we are talking about now, may not suffer from this problem. But a hint of it exists there too, so probably these are not that unbiased either. The people having the access, and the intention to press the stars on a keyboard, may belong to a certain predominant class/background, and hence those inherent biases would be reflected here. I don't find any other plausible explanation for what I see.

There is a big difference between the three methods, the individual and the two groups, the expert and the public. That is with reference to the time frame, or should I say memory?

Experts:

For the expert group's final ranking, I assume that the movies are screened and they watch them, as they do in various film festivals, and then do the ranking. The other two groups being people similar to me, I know that methodology. The movies were viewed over some considerable duration, not one after another, and then they were asked (or did it on their own) to rank.

Individuals:

In the case of individuals preparing a ranking list, most likely they are comparing a movie watched yesterday with another watched months earlier. Thus, in one case the impression is fresh, and in the other it is what has been carried over time.

In addition, the movies also carry something called the "Viewing Experience". It includes where you are watching, what the place was like, the behaviour of the people, the type of audience, with whom you have gone, in what frame of mind, and many other factors. Most of the time, these extraneous factors colour the experience more heavily than the merit of the movie itself.

This experience gradually wears off, but it takes some time for the viewer to become normal. Thus a movie which has been normalised (seen quite some time back) can't be compared to one I saw yesterday, which in addition had some strong experience (positive or negative) associated with it. When I place this new movie among all the old ones, can I place it at the correct position?

In addition, the memory, as well as the experience in between, plays a significant role. Suppose I am in a travelling job. During one of my excursions to another town, I come across a delicacy that I love. Then I come back, and again my menu is the same, which has become bland and boring to me, probably through repetition. After some months, in some other town, I again come across a different, or maybe even a similar, delicacy. Would I be able to rank which was better, from my viewpoint? I don't think so, unless I get both of them together once again on my table, side by side.

The same simile could be transferred to movies. You loved a particular movie, and after that you have repeatedly been exposed to some substandard and boring ones. Naturally, not every movie is something you would rate highly, or that comes highly recommended, but still, to kill time, you would sit in front of the screen. After some significant time (naturally, good movies are not released daily), you might again come across one that caught your fancy. It would be easier than comparing foods, where only the highly volatile palate memory is to be relied on. However, unless you watch them once again back to back, most probably the movies of the interregnum would make you think either that the first one was better than it really is, or the last one so.

But this too has a lacuna, to make the issue even more complex. Many times, the movies won't be worth a second view. Anyway, you won't like them as much on the second view as you did the first time, when they were novel. The best example would be the suspense thrillers, say of Hitchcock. This, if attempted, is definitely going to skew your opinion further when you carry out the ranking, that is, if you do the back-to-back watching. As far as I am concerned, I don't think I am going to, just for the purpose of comparing the two (or more), which isn't an important activity for me. I would rather go by memory, one of a fresh juice and the other of a mature wine. The merit points awarded by me won't depend only on my palate or the quality, but also on my disposition and mood, now as well as then.

The movies might suffer from exposure too. When I watch a movie some days after its release, especially when a few of my friends have watched it and given their opinions, and then it is my turn to go to the theatre, I am already conditioned by these opinions. If I am an iconoclast, I find small errors and try to put the highly-praised movie down, and if I am a conformist, I overlook major goofs and push it up. When a third person checks the ratings, he won't know in which situation and frame of mind I, or others, rated these, and hence whether the marks are really unbiased or not.

I would keep in mind that this is an individual exercise, so no averaging exists to negate the above effects.

Groups:

The web metrics too have their drawbacks. I watch the movie, and when I next log in to the website, immediately or within a few days, I put in my rating points. Up to this point everything is alright. Every time I allocate marks, it is for a fresh fruit juice, and I don't have to depend on my memory. The marks are independent, not ranking marks. It is like what we call a zero-based budget, where we forget the past and allocate the number I feel it deserves. Maybe, as I do it, 6 would be average (can watch), and 10 would be excellent, which I would watch again as well as want others to (movies to watch, not waiting for your death), whereas the low marks would be the ones I would watch, or recommend, to firm up my intentions to commit suicide.

It works well in the average zone, without any controversy or complications. At the lower edge, too, it works well for me, since except for some elites, like the UCLA people (the Razzies), none are actually bothered about the (de)merits of these types of movies.

It does, however, have some complications at the extremes, on both sides. Let us say I have watched a movie and given it 10 marks. After some time, I watch another, which too falls in that zone. In my opinion, the first one was worse (or better) than this one. To fit this one in, I must change the marks of the first one to, say, 9.5 and then put the new one at 10. That is to maintain their relative positions in my merit list. But how many times would I do that?

Changing the marks of a previous one is anyway difficult; I most likely won't even take the effort to check how many marks (or stars) I gave to the previous one. To do that, I would probably have to check several movies of a similar kind (in terms of their appeal to me). Only after that should I make up my mind on the marks to allocate to the current movie, so that it, or the other, retains the edge I felt it must have. Probably I would have to readjust the marks of quite a few movies to fit every new one in. This isn't really an assumption: though not on the web, I have an index file in Excel, where I often try this re-marking to put a new one in its place.
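That Excel re-marking exercise can be sketched as follows. The titles, the marks and the 0.5 step are all invented for illustration; the point is only the mechanism of nudging an incumbent down to slot a newcomer in above it.

```python
# A personal list, already sorted by marks (highest first).
my_list = [("Movie X", 10.0), ("Movie Y", 9.0), ("Movie Z", 6.0)]

def insert_between(ranked, new_title, better_than):
    """Insert new_title just above better_than, re-marking as needed."""
    out = []
    for title, marks in ranked:
        if title == better_than:
            # Give the newcomer the old marks and push the incumbent down.
            out.append((new_title, marks))
            out.append((title, marks - 0.5))
        else:
            out.append((title, marks))
    return out

updated = insert_between(my_list, "New Movie", better_than="Movie X")
print(updated)
# New Movie takes 10.0 and Movie X drops to 9.5, as in the 9.5/10 example.
```

Note that a single insertion near the top may force a cascade of such nudges further down, which is exactly why the manual version quickly becomes tiresome.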

Even if I do so, this too would suffer from all of the above maladies. However, there is a saving grace in the groups, which always efficiently remove the creases caused by individuals, so the effect of the "Viewing Experience" would be averaged out.

There would be another critical aspect in the ranking, the external ambience.

Let me look at it this way: suppose the movies released in 2011 had time-travelled, i.e. instead of being released in 2011, all of them had been released in 2016, and the movies released in 2016 were only these, none else.

Will the marks of the viewers still hold? Will the various rankings, whether by individuals, by groups or by experts, remain the same? Will the Oscars too be awarded to the same movies/actors/features?

Assuming no political interference, probably so. By political, here, I don't mean a political party, but certain compulsions of politics (e.g. you haven't given a single award to a particular aspect, resulting in a media/social-media storm? Give it all the majors this year, to balance up).

Fortunately, the others, the individuals and groups, themselves being part of the media or the media's instigators, would be more apolitical and hence more likely to repeat what they did when no time travel existed. Probably "The Artist" would still have been the best movie for the experts, and "Phantom of the Opera" would top the public opinion, though the same public would contradict themselves once again and stand in long queues to watch the Witches (Harry Potter), the Honey-Dipped Candies (Twilight Saga) or the Warring Robots (Transformers).

Now let me move the clock further and say that instead of 5 years, I turn it back by 50 years and release those movies for the first time now. Would the critics' winner among "Alfie", "A Man for All Seasons", "The Russians Are Coming, the Russians Are Coming", "The Sand Pebbles" and "Who's Afraid of Virginia Woolf?" still be the same?

Theoretically yes, since the movies are selected by the experts for their excellence, and relative excellence can't change over time. If there is a technology gap from today, it would be there, more or less, for all the movies of the year out of which the list is prepared.

But if the exercise were actually repeated, most probably the list would not only get an upheaval, but a few movies which didn't feature could get into it at the expense of some of the nominees, maybe even the winner. That is due to the secret ingredient. However logical the experts might try to be, and however much they attempt to divorce themselves from the ambience, they can't. Their choices are bound to be modified by the then-present socio-economic-political situation, as well as the societal trends and preferences of the age. That would reflect in the list as well as the ranking.

The major movies (by IMDb rank as well as the box office of the year) are listed here, and I again find a huge disconnect between the critics of that particular year and the viewers. The viewers overwhelmingly voted for "The Good, the Bad and the Ugly", whereas the box office voted for "Who's Afraid of Virginia Woolf?", which isn't too bad for the viewers either, at #2, though the Tomatoes don't like it that much, making it rot at #4. The experts' choice, "A Man for All Seasons", has interestingly maintained #3 in all three columns (Box Office, IMDb as well as Rotten Tomatoes).

(The Oscar winner for best movie is green and the other nominees are blue.)

Here I should note one factor. The Oscars and the box office belonged to the specific year (e.g. 1966), but the other two columns are aggregated from 1966 to today (the first quarter of 2017).

Had the Oscars been judged for these today, I wonder which would have been the winner: Virginia, Alfie or the Good, the Bad and the Ugly, or would they have still insisted on the seasoned man?

It is very difficult to presume what would have happened if all the movies of 1966 had been released in 2016 as new movies. It is frankly my limitation, since I can't prevent myself from thinking with hindsight. But still, I would assume that though the winner could (not necessarily would) be different, the composition of the nominations would remain approximately the same.

Probably only the marauding Russians would be replaced with some who are worse and uglier. I am not comparing the relative merits of the two movies; it is because the current psychology and societal view would be in action here. Due to the recent US elections and the implied Russian involvement, they are back as the cold-war hate-object, and that emotion is quite concentrated in the Hollywood community.

That movie had been a bit sympathetic towards the Russians, and hence is unlikely to get the nod of the 'Unbiased Experts'. The Russians in this movie, of course, were still a bunch of bungling fools, but that is the Hollywood policy towards all non-Americans. This attitude is naturally directed more towards the (imagined) foes, but the friends too don't really avoid the honey trap. In this movie, the enemies had a human heart, unlike the ruthless James Bond Russians or Indiana Jones Nazis, who would scare the heroine to death even while turning to dust themselves.

This stereotyping of Russians and Nazis is the rule, not the exception, in Hollywood. I have referred to these only as examples. There are many movies with much worse bungling Russians and Nazis (the current fashion statement includes North Koreans too), with hearts made of harder and colder stone, that were lapped up by the audience, the critics and the award-givers in tandem. There goes the basic assumption of the divorce of the jurors and critics from the general social group-think.

However, most of the movies were not blatantly political, except maybe in the 1940s and during the various American wars to impose democracy and human values (as per their definition of these) on Vietnam, Iraq, etc. They have never agreed with, or even thought it worth reflecting upon, what Bernard Shaw said in disapproval of the post-war trials of the German prisoners of war (and leaders), asking why this act of self-righteousness was being performed when we are all criminals too.

I would like to assume that, except maybe for the winner and a couple of nominees, the other movies were there on merit, or at least on what the experts thought of them.

If that is so, then these non-political, non-agenda-driven movies were the best among the neutral types. It should be more so, since they had won their nomination against all odds (the public hysteria over a subject), and hence should certainly retain their place in the list. The political and agenda-driven ones might go out if the current situation has reversed, or at least abated to a homeopathic level. They could obviously remain if the political/social condition is still relevant, or has come into relevance again.

It would be safe to assume that these "other" movies were the best of the year, at least as far as the experts judged them. For them, at least, the other bias wasn't there. In addition, it could also be safely assumed that the agenda movies too were the best in their agenda class; e.g. if it is anti-Communist, it would be the best of that year among the Commie-bashers.

Maybe some agenda that was not in vogue then is of prime importance now (e.g. a refugee crisis, or a certain threat). In that case, probably some movie that was considered insignificant by the then audience and critics alike would jump into the fray, if the list were prepared again.

But I would note that it is unlikely for a movie to be made on a then-insignificant social subject which becomes of prime importance after half a century has elapsed, unless it is truly avant-garde and of far-reaching prescience. Probably in that case too, it would be among the 'honourably mentioned', like the few sci-fi movies, or, if not movies, then literature like 20,000 Leagues Under the Sea or various of H. G. Wells's books.

A few exceptions exist: there were movies without agenda which were too difficult to be understood at the time, and which were understood and appreciated much later, not unlike Keats, Van Gogh, Bach, or even the Wilde namesake of the Academy Award, who was finally pardoned 117 years after his death, a death which occurred because he was not pardoned, at that time, for a certain act he had done.

But there are not many such, and I would rather overlook these few blips on the radar, or perhaps consider them on a case-by-case basis. I can’t do that for all, but for a few individuals, it should be possible.

I would thereby make a basic presumption: that the experts and critics are likely to keep the nominations with, at most, a few changes, and the name within the envelope would either remain the same or, if it were someone other, it would still be from the original nominations. Only in the rarest of rare cases would the encroacher also be the best movie. I don’t think this assumption is too far-fetched.

Now I would leave the experts and go to the public of their choice. This is much simpler than the above. Let us look at the public opinion portals. These portals were born only in the last half-decade of the previous millennium, or quite deep into the first decade of the current one, after software and the web had entered deep inside our homes and become omnipresent in our lives. When I go and vote for a movie on these portals, there would be two categories:

  • The ratings of current movies would be based on the current state of affairs, and the movies would also be fresh in memory. Most probably for this reason, the data would be a bit skewed (biased). That is natural and can’t be avoided. I remember that a movie like Bahubali started with an IMDb rating of 9.5+, and I am sure the sequel will follow in its footsteps or even better it. There are of course many reasons behind this: the first-period audience is attracted by the reputation of the movie and would rank it very high (or very low, if expectations weren’t met). In addition, movies that don’t let you think, the ones with more glitz and glamour and no story, like this movie or the “Lord of the Rings” series or the “Star Wars” series, would get much higher “thoughtless” ratings from audiences than the movies with a story and possibly some message, which get the “thoughtful” ratings. In those cases, the relevance of the story and the message would draw better (and obviously lower) marks from the more elite audience (who else would watch these?). To justify the statement about lower ratings, I would point out that the discerning audience will rarely award full marks to even exceptional movies, whereas the other kind would be more inclined to Bo-Derek these glamorous nonsenses.
  • What happens when the movies age? The rankings drop: e.g. Bahubali is now at 8.3 (though that is still too high, and I would place it somewhere between 6 and 7). As time passes, and people watch it after a few years, maybe a second viewing, and then record their opinion, the marks slowly move to their rightful figure. Obviously, a movie that was released long before the system started would almost start where it belongs. In addition, since the system is dynamic, it would also take into account the annual inflation (i.e. changes in public perception) of the period, which is around 20 years now (the mid-nineties to today).
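
This drift from opening hype toward a settled figure can be sketched numerically. The numbers below are entirely hypothetical (a made-up fan-heavy opening week followed by a larger, cooler general audience), but they show how a running average of the kind these portals publish would behave:

```python
import random

random.seed(42)

# Hypothetical voter model: the first 200 votes come from fans drawn
# to the movie's reputation (high marks), the next 2000 from the
# general audience over the years (lower, more spread-out marks).
early = [random.gauss(9.5, 0.5) for _ in range(200)]
later = [random.gauss(6.5, 1.5) for _ in range(2000)]

votes = early + later
checkpoints = (200, 500, 1000, 2200)

# Running average after n votes, i.e. what the portal would display.
for n in checkpoints:
    avg = sum(votes[:n]) / n
    print(f"after {n:4d} votes: {avg:.2f}")
```

With these assumed numbers, the displayed score opens above 9 and sinks steadily as the later votes dilute the fans, much as Bahubali slid from 9.5+ to 8.3; a pre-portal classic, by contrast, collects all its votes in the diluted regime and so starts near its long-run figure.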

Thus, I could safely assume that the old movies would be almost optimally ranked by these scores, their longevity and universal appeal having been taken care of, especially for the more discerning audience (who else would forgo the glitz and watch classics?).

In addition, given the type of audience that would put their seal on this, I would also like to hypothesize that this rank order would, at least approximately, replicate the one made by the experts and critics.

All these data are available in uncopyrighted domains for me to tap and use to prove my hypothesis:

  • Current Expert Opinion on the longevity and excellence – AFI 100 years 100 movies.
  • Audience call on these same factors – IMDb and Rotten Tomatoes scores.
  • Expert opinion of the time, (Oscar awards and nominations for the relevant years).

All of these are group opinions and none are individual, hence it should filter out the individual biases through averages.
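That filtering effect of the average is worth a small sketch. In the toy model below (all numbers assumed, not measured), each rater carries a personal bias, a fan here, a hater there, yet the mean over a few hundred of them lands close to the underlying quality even though some individuals are wildly off:

```python
import random
import statistics

random.seed(1)

TRUE_QUALITY = 7.0  # assumed "real" merit of the movie, on a 10-point scale

# Each hypothetical rater has a personal bias (fan, hater, contrarian...)
# plus a little noise on the day they vote.
biases = [random.gauss(0, 1.5) for _ in range(500)]
scores = [TRUE_QUALITY + b + random.gauss(0, 0.5) for b in biases]

group_avg = statistics.mean(scores)
worst_miss = max(abs(s - TRUE_QUALITY) for s in scores)

print(f"group average: {group_avg:.2f} (true quality {TRUE_QUALITY})")
print(f"worst single rater is off by: {worst_miss:.2f}")
```

The group average sits within a fraction of a point of the assumed true quality, while the worst individual misses by several points: exactly the sense in which a group opinion should wash out individual biases, so long as those biases don't all lean the same way.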

The first two should approximately match, and both of these sets should come exclusively from the third set (which is much larger, since I have taken not only the winners but the nominations too).
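
The check itself is simple set arithmetic. The titles below are a hypothetical miniature stand-in for the real lists (the actual test would use the full AFI list, the portal scores, and the Oscar records):

```python
# Hypothetical toy samples of the three lists, for illustration only.
afi_top    = {"Citizen Kane", "Casablanca", "The Godfather", "Vertigo"}
audience   = {"The Godfather", "Casablanca", "Citizen Kane", "12 Angry Men"}
oscar_noms = {"Citizen Kane", "Casablanca", "The Godfather", "Vertigo",
              "12 Angry Men", "How Green Was My Valley"}

# Hypothesis 1: the expert and audience lists should roughly agree.
overlap = afi_top & audience
print(f"expert/audience overlap: {len(overlap)} of {len(afi_top)} titles")

# Hypothesis 2: both lists should sit inside the (much larger) set
# of contemporary nominations.
print("experts drawn from nominations:", afi_top <= oscar_noms)
print("audience drawn from nominations:", audience <= oscar_noms)
```

Run on the real data, a large overlap in the first print and two `True`s in the second and third would support the presumption above; any title outside the nominations set would be exactly the "encroacher" argued to be rare.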