When trying to make a reasonably objective analysis of a problem, a good rule is to wait until the evidence presents itself before coming to a conclusion. This works pretty well with physical problems, but once we enter the world of ideas this natural process can get bogged down. The rule has value in its attempt to leave the "me" in the closet. Once "I" have laid claim to something, there remains an ongoing interest in seeing that viewpoint through. The strength of our interest grows with the depth of its network, but even if we have only declared the opinion to ourselves we can become powerfully committed to its success (or at least it seems this way to me; I seem to recall reading good supporting evidence for this, but I'd have to look into it).
So this leads me to a semi-absurd view on just about everything these days. Basically my line is "I am in fact-gathering mode." It's too early to come to a decision. When we talk about deeply complicated issues, e.g., what drives growth?, there are experts who have been working for decades and still don't have clear answers. It seems better to err on the side of caution here than to pre-commit and bias a lifetime of research. My fear is that I will never deem it acceptable to get out of fact-gathering mode, never being able to commit to a policy side. Bad thing?
But then again, I doubt people decide on their favored policies in such a logical Cartesian manner. For it's when you get out in the real world and are forced to lay claim to something that you find out what your "beliefs" are. Only when we are constrained by career paths, families, and money do we find ourselves more clearly "understanding" the truth. A well-financed student such as myself lives a comfortable life and has claimed little intellectual territory (needless to say, I have publicly or professionally claimed no territory). I can't help but feel embarrassed by the favorable circumstances that allow such an attitude. But if your objective is to find some sort of truth in social science, there are advantages to a leisured, self-motivated lifestyle. (Unless of course you seek to justify this leisured lifestyle through your work.)
Tuesday, August 18, 2009
Tuesday, June 16, 2009
Education is not about learning
So I'm going to go on an uncharacteristic rant here. Mainly I've just got to get something off my chest and I'm too ashamed to say this out loud as I realize I may simply be a sore loser.
I've just finished a difficult math course and received a dismal grade. This has been quite a setback, as I put monumental effort into the course. But what angers me is that, from all I could see over two quarters with this professor, she showed no sign of caring whether I learned anything about the material. She seemed interested only in running a contest to see who was the smartest and in weeding out the weak links. Now of course I should have expected this, but angered I still am.
This is the stereotypical example of a brilliant professor at a prestigious school being an obliviously incompetent teacher (although I can't overlook the more sinister possibility that it was a deliberate strategy). The professor came with high credentials and unequivocal mastery of her trade. However, she seemed completely unaware of the structure of the social environment she was creating. She was more likely to mock a student than to engage in a search for the underlying confusion. Her primary tools were fear and an extremely demanding course load.
Now a challenge is exactly what any interested student needs in order to progress. However, discretion is required in setting the bar. Lacking that discretion, her office hours and discussion sessions became effectively mandatory. This fact was never made explicit, so for many homework assignments I didn't go to office hours (falsely assuming that we had been prepared well enough in class), proceeded to flutter my wings in the library for entire weekends, and blamed only myself for my incompetence. Only later did I notice that students who attended office hours received invaluable hints, and only with those hints could they properly solve the homework. And not only must you attend office hours; you must attend both sessions and come prepared to discuss the problems almost a week before they are due in order to make headway.
Her discussion sessions broke another implicit student-teacher agreement. A discussion session serves to sharpen understanding of previous material. Not only did she effectively discourage all questions about that material, she went further and taught new material in the discussion session, showing utter disregard for academic norms. Discussion sessions are scheduled with the full understanding that not everyone will be able to make them; if attendance were intended to be mandatory, they would be scheduled by the registrar like any normal class session. In effect, she was using office hours and discussion sessions to teach new material when the students had clearly demonstrated that they had yet to master the material already given.
But I can't be too angry, I suppose. I knew I was in for this. I knew that taking these math classes was 80% a signalling effort to show that I am smart enough for economics grad school. And whether she realizes it or not, she may be using her strategy to weed out the lesser students in an effort to elevate the status of her profession. But of course teachers won't admit this, and schools won't admit this. I assume the administrators, faculty, and donors of a prestigious university would state that they intend to be an institution of learning. And I bet they believe it wholeheartedly. The question is whether this is a reasonable goal.
I won't be a complete pessimist though. It should be a school's job to recognize that a natural incentive of a university will always be to promote pure difficulty over education in an effort to produce the smartest graduating class. And once it recognizes this, it should take steps to realign its incentives back toward education. (I guess "should" is too strong here. On what basis? I guess because I'd like them to.)
The unfortunate part of this whole inquiry is my doubt about my own motives. A highly plausible scenario occurs to me: that I engage in higher learning only in an effort to signal my abilities, and that I act in self-deception when I claim I go to school to "learn." Furthermore, I was perfectly compliant with the system until it punished me with a low grade; only now am I willing to voice my anger at its unwholesome activities. And as I am the worst arbiter of my own motives, I have no reply to this criticism.
Friday, May 8, 2009
Good Judgment
Unfortunately I have no time to write these days, but I do have time to quote. So here is a gem I just came across. This is a philosophy that I wholeheartedly endorse but was largely ignorant of in times past. From Tetlock's wonderful "Expert Political Judgment" (p. 215):
Good judgment, then, is a precarious balancing act. We often learn we have gone too far in one direction only after it is too late to pull back. Executing this balancing act requires cognitive skills of a high order: the capacity to monitor our own thought processes for telltale signs of excessive closed- or open-mindedness and to strike a reflective equilibrium faithful to our conceptions of the norms of fair intellectual play. We need to cultivate the art of self-overhearing, to learn how to eavesdrop on the mental conversations we have with ourselves as we struggle to strike the right balance between preserving our existing worldview and rethinking core assumptions. This is no easy art to master. If we listen to ourselves carefully, we will often not like what we hear. And we will often be tempted to laugh off the exercise as introspective navel-gazing, as an infinite regress of homunculi spying on each other...all the way down. No doubt, such exercises can be taken to excess. But, if I had to bet on the best long-term predictor of good judgment among the observers in this book, it would be their commitment--their soul-searching Socratic commitment--to thinking about how they think.
Wednesday, January 21, 2009
A Concise History of the Russian Revolution
I'm excited to start a new book today, "A Concise History of the Russian Revolution" by Richard Pipes. Thought this paragraph worth quoting:
This idea holds an irresistible attraction for intellectuals because it elevates them from the position of passive observers of life into its shapers. Their superior knowledge of what is rational and virtuous permits them to aspire to the status of mankind's "educators." While ordinary people, in pursuit of a living, acquire specific knowledge relevant to their particular occupation, intellectuals--and they alone--claim to know things "in general." By creating "sciences" of human affairs--economic science, political science, sociology--they feel at liberty to dismiss as irrelevant practices and institutions created over millennia by trial and error. It is this philosophical revolution that has transformed some intellectuals into an intelligentsia, actively involved in politics. And, of course, involvement in politics makes them politicians, and, like others of the breed, prone to pursue their private interests in the guise of working for the common good. [p. 23]
Friday, January 9, 2009
Madison's Argument for Bigger Govt
One of my usual arguments when discussing systems of government, and incentive systems more generally, is the notion that smaller is better. A local government will be more in tune with the sentiments of its people, and hence have better information than a government entrusted with a large populace. Better information will allow the government to more appropriately serve its constituents.
This argument faces some opposition in my current read, American Creation by Joseph Ellis. Here is a passage on James Madison's thinking as he prepared for the Philadelphia Convention of 1787:
So Madison reversed the conventional logic [apparently I am one of those conventionalists]. Small republics, like the states, were actually more vulnerable to factional squabbling and sectarian divisions than large republics, because the larger scale of the enterprise vastly increased the number of competing factions, thereby producing "a greater variety of interests, of pursuits, of passions, which check each other...So an extensive Republic meliorates the administration of a small Republic." It was not just that a fully empowered national government was likely to attract a better class of statesmen more capable of resisting local pressures, though Madison believed that was true too. More fundamentally, an extended American republic, contrary to the prevailing wisdom, made space an asset rather than a liability. (pp. 105-106)
I interpret this as nothing other than the age-old wisdom of the law of large numbers, or the diversity principle, whereby increasing the quantity of data reduces expected variation. This is a nice point, but there is a fundamental difference between this view and the conventional view I briefly outlined above. Madison is looking at the downside of government involvement: more specifically, the use of government to extract concentrated gains from the unconcentrated whole.
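As a toy illustration of that diversity principle (my own sketch with made-up numbers, not anything drawn from Ellis or Madison): if each faction's position is an independent draw around some center, the expected drift of the aggregate away from that center shrinks as the number of factions grows.

```python
import random

def expected_drift(num_factions, trials=10_000):
    # Average distance of the factions' mean position from the center (0),
    # when each faction's position is an independent uniform draw on [-1, 1].
    total = 0.0
    for _ in range(trials):
        positions = [random.uniform(-1, 1) for _ in range(num_factions)]
        total += abs(sum(positions) / num_factions)
    return total / trials

for n in (3, 10, 100):
    print(n, round(expected_drift(n), 3))  # drift shrinks roughly as 1/sqrt(n)
```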
Clearly, this is an excellent point and a very useful way to approach government action. On the other hand, it is hardly the whole story. Politicians will always have an incentive to provide valuable services to their constituents in order to promote re-election. (However, a valuable service must be easily traceable to its source in order to be effective. Also, as is all too apparent, politicians may take actions that appear valuable but in reality are destructive.) The worse the politician's information, and the smaller the share any single constituent makes up of the whole, the less a citizen can expect his preferences to matter.
Admittedly, this is a big question and I have given a short answer. But it may reveal a bit of the flavor of this balance, or tradeoff.
Which is better, big or small?
Thursday, November 13, 2008
Endowment Effect II
So I had a nice discussion about probability distributions concerning ideas. Unfortunately I failed to make a sufficient connection to answer the question.
How would a method of more accurately accounting for the truth of our beliefs counteract the endowment effect?
Normally when considering the probability of alternatives there are two key parameters: (1) the probability of an outcome's occurrence and (2) the magnitude of the gain or loss given that occurrence. In this way widely different "projects" become tradeable. For example, say I have $100,000 and am offered an investment opportunity in the stock market with a 5% chance of a 100% return and a 95% chance of a 0% return. The expected payoff for this investment is simply calculated: [0.05*($100,000*1.0) + 0.95*($100,000*0.0) = $5,000]. One could compare this opportunity with, say, an investment in alpaca farms; suppose that opportunity is expected to yield $20,000 over the same time horizon. Although these are wildly different projects, they are tradeable in the sense that they are alternatives.
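As a minimal sketch of that calculation (using only the made-up numbers from the example above), the two very different projects reduce to comparable expected payoffs:

```python
def expected_payoff(principal, outcomes):
    # outcomes: list of (probability, rate_of_return) pairs
    return sum(p * principal * r for p, r in outcomes)

stock = expected_payoff(100_000, [(0.05, 1.0), (0.95, 0.0)])  # 5000.0
alpaca = 20_000  # the stated expected yield of the alpaca-farm alternative
print(stock, alpaca)  # comparable as alternatives despite being wildly different projects
```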
In the market of ideas, the values of ideas don't appear to be tradeable in the same way. Ideas are limited to their topic: within a topic they are tradeable, but cross-topic tradeability doesn't hold. So we have millions of topics (policy issues, philosophical questions, scientific inquiries, etc.), and within each topic are alternatives. Each alternative could theoretically have a truth probability, and hence the only value an idea holds is its truth probability. We could conceivably multiply this probability by some measure of the value of the topic in general, which would enable cross-topic idea rankings, but that is not the subject of our inquiry. We are concerned with the appropriate level of confidence in an idea's truth within a specific topic.
Back to the endowment effect. What the endowment effect means in this context is that ideas that are "owned" are given too great a value; their truth probability is ratcheted up too high. My argument is for a method that estimates truth probabilities more accurately. I do not attempt to counterbalance the bias but instead hope to improve the mechanism causing the error.
Often I find myself falling into the trap of discrete thinking about ideas: either an idea is "right" or "wrong," 0% or 100% probability. In reality nothing is so cut and dried, especially on complex issues. This flaw seems a natural shortcut for the mind and may be behind the endowment effect in ideas. (Although we certainly haven't proven there is such an endowment effect, and given its different features it may deserve a different title.) If we can learn to think probabilistically, instead of in all-or-nothing terms, we may achieve greater rationality in the domain of ideas.
Friday, November 7, 2008
Endowment Effect
The endowment effect is one of many cognitive biases recognized by the behavioral camp of economics. I like the name "ownership bias" better (I think it's clearer). In short, the ownership bias results in placing greater value on goods that are owned than on ones not owned but out in the marketplace somewhere.
My question is: can this bias be applied to ideas?
Well, first of all, how can we "own" ideas? I'll define the ownership of an idea as the state where an individual moves beyond a stage of fact gathering and has reached a conclusion on the relevant issue.
The consequence of such a bias would be that the individual no longer views all ideas fairly, but disproportionately values the ideas he has determined to be "correct" in the past. Such a state would pose a dilemma for the individual's hopes of rationality in the consideration of alternative ideas.
Obviously, I haven't tested this, nor have I checked for similar concepts in the prior literature. On merely intuitive grounds it seems to make some sense. Observationally, people seem to resist changing their minds (of course the variance in resistance must be massive across different types of issues). Also, when a mind is confronted with irrefutable evidence of a mistaken belief, severe cognitive dissonance is the usual symptom. Such a reaction would coincide with the destruction of a large value (the owned idea) and the gain of a small value (the new idea), resulting in a net pain or loss, even though the new idea must be superior based on the evidence.
With this in mind, how can we restore some semblance of rationality? The first logical answer is not to be hasty in reaching conclusions. This is good advice, but quite well known and, therefore, boring. At some point, on some issues, we will inevitably be forced to reach a conclusion. No matter how well thought through the process of acceptance, according to the ownership-bias theory the end result will be a disproportionate assignment of value to the idea's merit, validity, truth, etc. However, it does stand to reason that on issues where one is relatively uninformed, the best policy may be not to reach a conclusion at all.
Upon reaching a conclusion, a good technique may be to take inventory of the likely truth of your ideas: assign some sort of probability to their correctness. It seems that when most people follow this advice they come up with a distribution where a 99% probability is reserved for ideas of the utmost certainty and a 51% probability for issues of a tenuous nature. However, this only applies to simple true/false questions, where there are just two options. For real-world issues there will be infinitely many possible solutions. It is only through the right framing (e.g., will the human race become extinct in the next 100 years?) that the problem can be circumvented, but this framing often removes much of the importance of the initial inquiry.
The point is that when you consider the probability that your view is correct, think of all the possible alternatives. It may be very realistic to hold a 5% chance that your view is correct. This may be a view you have thought very long and hard about; you may be relatively more confident in your correctness on this issue than on most of the conclusions you indulge. However, given the complexity of the problem, it is very likely that you are wrong (a 95% chance). Yet the view you have chosen remains the most likely winner of all possible candidates.
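A toy illustration of that last point (the 30 alternatives and the even split among the rivals are assumptions of mine, just to make it concrete): a view held at 5% can be the single most likely candidate even though it is probably wrong.

```python
num_alternatives = 30
my_view = 0.05
# Assume the remaining probability mass is spread evenly over the 29 rival views.
rival = (1 - my_view) / (num_alternatives - 1)

print(round(rival, 4))          # ~0.0328 -- each rival is less likely than my view
print(my_view > rival)          # True: my view is the modal candidate
print(round(1 - my_view, 2))    # 0.95 -- and yet I am probably wrong
```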
Not resisting the urge to wildly speculate: 5% seems a much more realistic probability than 51% on many social science questions (and the 5% view may not even be as tenuous a position as the 51% one was).
Whether this type of "realistic" probability distribution assignment has the power to overcome the endowment effect, or ownership bias, is a question I will not answer.