Three Years After Graduation

It will soon be three years since I finished my undergraduate degree. The biggest change in my life over these three years has been entering the social science academic world and then leaving it; correspondingly, I left the typical Chinese STEM student circle and then returned to it. Last week two young public intellectuals gave a talk in New York (http://nyshalong.com/event/138) on a topic closely related to what I used to work on. Two years ago I would have gone; this time I am not going. That is one concrete, visible example of how my daily life has changed.

This post is about the experience of switching from STEM to social science and back, and what I took away from it. It is organized around three questions.

  1. Why did I study sociology?

I have always been interested in the humanities: I loved reading as a child, and my grades in Chinese and foreign languages were good. But because of my family environment and schooling, I held prejudices against humanities majors. My science grades were decent, so before the college entrance exam I never considered a humanities major, and every choice on my application form was an engineering program.

My math grades were mediocre. In my second semester I took a very difficult math course, took a beating, and decided I was not cut out for science and engineering. So I looked into switching to the humanities and social sciences. I talked with faculty in Chinese literature, philosophy, sociology, and anthropology, weighed my options, and settled on sociology.

I kept taking sociology courses during undergrad and spent a summer as a sociology RA. I complained constantly, but kept exploring in that direction. I read some papers, along with Andrew Abbott's talk at the University of Chicago, Welcome to the University of Chicago (http://www.inf.fu-berlin.de/lehre/pmo/eng/Abbott-Education.pdf), and Andreas Glaeser's talk, So, how about becoming a poet? (https://aims.uchicago.edu/page/2005-andreas-glaeser). I was very much drawn to that kind of intellectual beauty.

At the same time, I was fairly ambitious when I was young: I was interested in how society works and wanted to uncover social regularities.

When the time came to apply after undergrad, the first reason was that I wanted to attend a prestigious school, and admission to sociology is less competitive than to STEM programs.

Second, I was genuinely curious about the social sciences, and often felt there were puzzles in my life that had never been resolved. In my sophomore year I talked with the person who later became my sociology advisor; I could not articulate the feeling, but I sensed that the path she had taken might hold some answers.

Third, for some unknown reasons, I could hardly imagine going to the US to become a programmer or data analyst at the time. Looking back, it was probably fear of a foreseeable (ordinary) life. I liked things that were new, exciting, and unknown, and perhaps also things that were abstract and romantic. As a child I liked sneaking into arcades; in middle school I fantasized about joining street kids in gang fights, and I loved The Catcher in the Rye. In high school I was supervised too strictly to do anything too wild. Even through my first three years of college I had only distant, romantic plans for the future. When I was young, grounded, patient cultivation was simply not my style.

Fourth, a certain political event in Hong Kong coincided with my graduation, and I was emotionally stirred up. When I was young I was always something of an inexplicable "angry youth." Sociology has a built-in concern with inequalities of class, gender, and so on, and it debates justice and values, which attracts young people. Many American sociologists switched into the field in the 1960s because of the progressive movements, for example Erik Olin Wright and Michael Burawoy; Chinese examples include Dingxin Zhao, Pun Ngai, and Sida Liu. Iván Szelényi makes a similar observation in "The Triple Crisis of Sociology": https://contexts.org/blog/the-triple-crisis-of-sociology/

As the saying goes (supposedly first uttered by John Adams: http://freakonomics.com/2011/08/25/john-adams-said-it-first/), "If a person is not a liberal when he is twenty, he has no heart; if he is not a conservative when he is forty, he has no head." Having watched myself turn from a liberal into a conservative over these two years, I find the line painfully accurate.

Fifth, I happened to have a scholarship. By chance, and luckily, I chose to stay in Hong Kong for a master's in sociology. In hindsight, I rather let my advisor down.

 

2. Why did I stop studying sociology?

I have rambled on about this question far too many times. If you look around, people who choose to do a social science PhD are the minority anyway, so not doing one is hardly surprising. You didn't do one either, did you?

First, and most important, I discovered that I had no genuine interest in social science research. At my first meeting with my advisor, discussing possible topics, I asked: what topic could get a paper published relatively quickly? That is not a question a serious researcher should ask. I had no ambition to invent a new theory of social change or a new state-society framework, and no desire to add bricks to the edifice of symbolic interactionism. Some sociology and psychology research is genuinely interesting, say the six degrees of separation, or Bourdieu's theory of distinction. But appreciating and creating are two different things.

Second, I realized that my earlier longing and confusion came mostly from being idealistic and drawn to things of the spirit. Most of the books I enjoyed were literary works, not social science scholarship. It was also tied to being emotionally sensitive when I was younger; in Big Five terms, high neuroticism. An academic career in sociology can only marginally satisfy such needs; a more fitting goal would probably have been to become a writer.

Third, I do not enjoy arguing with people, yet social science research, indeed all scientific research, ultimately aims to persuade others. This is especially true of political science: the people I have met who are interested in it tend to be either quite ambitious or fond of debate, which makes me wonder whether an interest in politics is also bound up with an interest in power.

Fourth, the sociological way of life wore on me. My master's program was British in style, prizing independent thinking, and graduate students were largely left to fend for themselves. What I longed for was a tightly regimented life; the high-school routine of arriving at six in the morning and leaving at nine at night put me at ease, because it let me dodge the responsibility of facing myself. Having too much time in my own hands made me guilty and miserable. Besides, the rhythm of working life really does differ from that of graduate school, and I prefer a somewhat more competitive environment.

Fifth, I never fit into the sociology crowd. My educational background and way of thinking differed from those of the students and teachers around me. I could barely hold a conversation with qualitative sociologists and ethnographers, and on the quantitative side, anyone with fairly hardcore mathematical training will likely find the looser, less rigorous style of argument hard to accept. On STEM-trained applicants moving into social science PhDs, the economics blogger Noahpinion has an interesting discussion: http://noahpinionblog.blogspot.com/2014/03/coming-into-econ-from-physics-and-other.htm

Sixth, learning sociology did not go as smoothly as I had imagined. My foundations were weak, I took many detours, and I often felt defeated. I will say more about this in the next post.

Seventh, when it comes to concern for injustice, my values never fully matched those of the sociology community, and my own values have kept shifting. In Moral Politics, George Lakoff argues that liberal voters think in a "nurturant parent" model: roughly, the disadvantaged owe their plight mostly to unfair social structures, and the government has a duty to protect them. Conservatives follow a "strict father" model: the unfortunate owe their plight mostly to themselves, and the emphasis falls on self-help, self-reliance, and self-discipline. I eventually found that although I logically placed myself in the first camp, deep down I identify more with the second. So I could never really connect with genuinely "left" sociologists and activists.

Eighth, setting all the analysis above aside and looking at it in purely utilitarian terms, sociology does not offer much in the way of material rewards.

3. What lessons did I learn?

  • Do what you are genuinely interested in. Unfortunately, the education I received before college was intensely utilitarian, so I never realized how important interest-driven choices are, and the pressure of the college entrance exam left no time or energy to think about it. In hindsight, only genuine interest makes you willing to take risks and able to persist. From my admittedly shallow experience, the most impressive people among my peers all share firm goals, the courage to make trade-offs, groundedness, focus, and persistence, and all of these probably presuppose interest.
  • Exploration vs. exploitation is a crucial trade-off in decision making, and the core problem in reinforcement learning.
    • If you spend too much time exploring, you cannot reach the optimal solution.
    • If you start exploiting too early, you likewise cannot reach the optimal solution. This really is a truth about life!
    • I hastily decided in my first year to switch to sociology, and failed to cut my losses during college and the master's; the price was heavy. That was exploiting too early.
    • When I decided to change careers in the summer of 2016, I then spent too much time exploring; had I committed to one direction, I would probably have saved a year.
  • Worship of elite schools. I wanted a prestigious school, and I did eventually get the chance to go to a top American university and work with leading scholars, but prestige is ultimately hollow. During my three months at Northwestern I was unhappy, restless, and full of complaints. I dragged down the people around me and wasted an opportunity my advisor had secured through her own network.
    • Where does this worship come from? One example: a high-school classmate's score fell short of regular admission to Peking University, and our teachers pushed him hard to go to PKU's medical school instead, but in the end he chose finance at Fudan. I could not understand it at the time. Now I see that the mindset of treating Tsinghua and PKU as the only yardstick ends up exacting a heavy price.
  • Pushing further down this line leads to the allocation of educational resources, and even class values.
    • Why did I go to that particular high school?
    • If I had gone to a different high school, could I still have gotten into a decent university?
    • After entering college, I found that many classmates came from better-off families than mine; compared with my own cousins, though, I was extremely lucky. That is also part of why I chose sociology and leaned left. The sociology of law has a "two hemispheres" theory: lawyers from lower-prestige family backgrounds often take on public-interest litigation, end up serving individuals and small clients, and lean left, while lawyers from privileged families tend to serve wealthy, powerful corporate clients. http://www.pkulaw.cn/fulltext_form.aspx?Db=qikan&Gid=1510166689&keyword=&EncodingName=&Search_Mode=

4. If you are a sociology student weighing whether to apply for a US sociology PhD: I have dug fairly deep into the "quit lit" genre, and collected the following representative pieces:

  • Graduate School in the Humanities: Just Don't Go. https://www.chronicle.com/article/Graduate-School-in-the/44846 The author earned an English PhD from a top school; the writing is fluent and the points cut to the bone. Strongly recommended, better than 99% of the discussion on the Chinese internet. He also wrote several follow-ups.
  • For sociology specifically, the author of orgtheory (a Chicago PhD who studies social movements and now seems to teach in Maryland) wrote a book, Grad School Rulz!, that covers the common questions all the way from application to the assistant-professor stage. His advice on whether to do the PhD is also no.
  • 100 Reasons NOT to Go to Graduate School. Aimed at the social sciences; somewhat repetitive, but quite comprehensive overall. http://100rsns.blogspot.com


Two approaches for logistic regression

Finally, a post in this blog that actually gets a little bit technical …

This post discusses two approaches to understanding logistic regression: the empirical risk minimization view and the probabilistic view.

Empirical Risk Minimizer

Empirical risk minimization frames a problem in terms of the following components:

  • Input space X = R^d. Corresponds to observations, features, predictors, etc.
  • Outcome space Y = \Omega. Corresponds to the target variables.
  • Action space A = R. An action is the output of a decision function, predictor, or hypothesis.
    • A subset of the maps from inputs to actions forms the hypothesis space, e.g. all linear transformations.
  • Loss function: l(\hat{y}, y), a loss defined on a predicted value and an observed value.

The goal is to select a function f in the hypothesis space that minimizes the total loss on the sample. This is achieved by choosing the parameter values of f that minimize the empirical loss on the training set. We also tune hyperparameters, which is done on the validation set in order to prevent overfitting.

In a binary classification task:

  • Input space X = R^d.
  • Outcome space Y = {-1, 1}. Binary target values (this sign convention is what makes the margin below meaningful).
  • Action space A = R. The hypothesis space: all linear score functions
    • F_{score} = {x \rightarrow x^Tw | w \in R^d}
  • Loss function: l(\hat{y}, y) = l_{logistic}(m) = \text{log} (1 + e ^{-m})
    • This is a margin-based loss, hence the m here.
    • The margin is defined as m = \hat{y} y, which has a natural interpretation in binary classification. Consider:
      • If m = \hat{y} y > 0, the prediction and the true value have the same sign, so in binary classification the answer is already correct. Ideally, for m > 0 we would want the loss to be 0.
      • If m = \hat{y} y < 0, the prediction and the true value have different signs, so we are wrong. Here the loss should take a positive value.
        • In SVMs we use the hinge loss l(m) = \text{max}(0, 1-m), a "maximum-margin" loss (more on this in the next post, which will cover the derivation of SVMs and kernel methods). With this loss there is no loss when m \geq 1, and a positive loss when m < 1. We can interpret m as the "confidence" of the prediction: m < 1 means low confidence, so it is still penalized.
    • With this intuition, how do we understand the logistic loss? We know:
      • This loss is always positive (it never reaches 0).
      • When m is negative (a wrong prediction), the loss is large.
      • When m is positive (a correct prediction), the loss is small.
      • Note also that for the same change in m, the rate at which correct predictions are "rewarded" is smaller than the rate at which wrong predictions are penalized.
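To make this concrete, here is a minimal sketch (my own, in Python with numpy, which the post itself does not use) that evaluates the hinge loss and the logistic loss at a few margin values; the asymmetry described above shows up directly in the numbers.

import numpy as np

# Margin values: negative = wrong prediction, positive = correct prediction.
m = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

hinge_loss = np.maximum(0.0, 1.0 - m)        # hinge loss: max(0, 1 - m)
logistic_loss = np.log(1.0 + np.exp(-m))     # logistic loss: log(1 + e^{-m})

for mi, h, lg in zip(m, hinge_loss, logistic_loss):
    print(f"m = {mi:+.1f}   hinge = {h:.3f}   logistic = {lg:.3f}")

# The logistic loss stays positive even for confident correct predictions,
# and grows roughly linearly for very negative margins.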

Bernoulli regression with logistic transfer function

  • Input space X = R^d
  • Outcome space y \in {0, 1}
  • Action space A = [0, 1]. An action is the probability that the outcome is 1.

Define the standard logistic function as \phi (\eta) = 1 / (1 + e^{-\eta})

  • Hypothesis space: F = {x \rightarrow \phi (w^Tx) | w \in R^d}
  • A sigmoid function is any function with an "S" shape; the logistic function above is one example. Sigmoids are used in neural networks as activation / transfer functions, where their purpose is to add non-linearity to the network.

Now we need to do a re-labeling for y_i in the dataset.

  • For every y_i = 1, we define y'  = 1
  • For every y_i = -1, we define y'  = 0

Can we do this? Doesn't it change the values of the y's? The answer is that in binary classification (or any classification) the labels themselves do not matter; this relabeling simply makes the equivalence below easier to show.

Then, the negative log likelihood objective function, given this F and a dataset D, is:

  • NLL(w) = -\sum_{i=1}^n [ y_i' \text{log} \phi (w^T x_i) + (1 - y_i') \text{log} (1 - \phi (w^T x_i)) ]

How to understand this approach? Think about a neural network…

  • Input x
  • First linear layer: transform x into w^Tx
  • Next non-linear activation function. \phi (\eta) = 1 / (1 + e^{-\eta}).
  • The output is interpreted as the probability of the positive class.
    • In a multi-class problem, the second layer would be a softmax instead, and we would get a vector of probabilities.

With some calculation, we can show that minimizing this NLL is equivalent to minimizing the sum of the empirical logistic losses from the previous section.
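The equivalence is easy to check numerically. Below is a minimal sketch (my own, with made-up data): labels in {-1, +1} go into the margin-based logistic loss, their relabeled {0, 1} versions go into the NLL above, and the two totals coincide.

import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = rng.choice([-1, 1], size=n)        # labels for the margin-based view
y_prime = (y + 1) // 2                 # relabel: -1 -> 0, +1 -> 1

def phi(eta):
    return 1.0 / (1.0 + np.exp(-eta))  # standard logistic function

# Empirical risk view: sum of logistic losses on the margins m_i = y_i * w^T x_i
margins = y * (X @ w)
logistic_sum = np.sum(np.log(1.0 + np.exp(-margins)))

# Probabilistic view: negative log-likelihood of the Bernoulli model
p = phi(X @ w)
nll = -np.sum(y_prime * np.log(p) + (1 - y_prime) * np.log(1 - p))

print(np.isclose(logistic_sum, nll))   # True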


Encoding categorical features: likelihood, one-hot, and feature selection

This post describes techniques used to encode high cardinality categorical features in a supervised learning problem.

Since these values cannot be ordered, the features are nominal. Specifically, I am working with the Kaggle competition here. The problem with this dataset is that some features (e.g. the type of cell phone operating system) are categorical and have hundreds of distinct values.

The question is how to feed these features into a model. Nominal features work fine with decision trees (random forests) and Naive Bayes (counts estimate the pmf directly), but other models, e.g. neural networks and logistic regression, need numeric inputs.

I will introduce likelihood encoding first, and then go over other methods for handling this situation.

Likelihood encoding 

Likelihood encoding represents the values of a categorical feature according to their relationship with the target variable. The goal is to find a meaningful numeric encoding, where "meaningful" means as related to the output/target as possible.

How do we do this? A simple way is to 1) group the training set by this categorical feature and 2) represent each value by the within-group mean of the target. For example, suppose the categorical feature is gender and the target is height. If the average height of males is 1.70 m and the average height of females is 1.60 m, we replace 'male' with 1.70 and 'female' with 1.60.
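A minimal pandas sketch of this group-mean idea (the column names and numbers are made up for illustration; in practice the means would be computed on the training set only):

import pandas as pd

train = pd.DataFrame({
    "gender": ["male", "female", "male", "female", "male"],
    "height": [1.72, 1.58, 1.69, 1.62, 1.70],
})

# Map each category to the within-group mean of the target.
group_means = train.groupby("gender")["height"].mean()
train["gender_encoded"] = train["gender"].map(group_means)
print(train)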

Perhaps we should also add some noise to this mean to prevent overfitting to the training data. This can be done by:

  • adding Gaussian noise to the mean (credit to Owen Zhang);
  • using the idea of "cross-validation": instead of the grand group mean, use out-of-fold means. (I am not very clear on this point at the moment; I need to revisit cross-validation and will write about it in the next post.) Some people on Kaggle propose using two levels of cross-validation: https://www.kaggle.com/tnarik/likelihood-encoding-of-categorical-features

One hot vector

This idea is similar to dummy variables in statistics. Each possible value is transformed into its own column, which is 1 if the original feature equals that value and 0 otherwise.
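For instance, with pandas (a sketch using a made-up "os" feature):

import pandas as pd

df = pd.DataFrame({"os": ["ios", "android", "ios", "windows"]})

# Each distinct value becomes its own 0/1 column.
one_hot = pd.get_dummies(df["os"], prefix="os")
print(one_hot)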

An example is natural language processing models, where the first steps are usually to 1) tokenize the sentence, 2) construct a vocabulary, 3) map every token to an index (a number, essentially a nominal value), 4) one-hot encode the indices, and 5) apply a linear transformation, in the form of a linear layer in a neural network, that turns the high-dimensional one-hot vectors into low-dimensional vectors. In this way we represent every symbol as a low-dimensional vector whose exact values are learned. What is happening here is really dimension reduction, so after learning the weight matrix, other methods, like PCA, could potentially work here as well.

Hashing 

A classmate named Lee Tanenbaum told me about this idea. It is an extension of one-hot encoding. Suppose the feature can take n values. We use two hash functions to map the values into two new variables. The first hash function maps all values into roughly \sqrt(n) buckets, so each bucket contains about \sqrt(n) of the original values; all values in the same bucket get the same value of variable A. We then use a second hash function that carefully hashes the values into another bucket variable B. We want the combination of A and B to uniquely identify every possible value of the original feature. We then learn a low-dimensional representation for both A and B and concatenate them.
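Here is a toy sketch of how I understand the trick (my own code, not from the original description; with generic hash functions the pair (A, B) is not guaranteed to be collision-free, so a careful implementation would have to check or design the second hash for that):

import hashlib
import math

def bucket(value, salt, n_buckets):
    # Deterministically hash a (salted) value into one of n_buckets buckets.
    digest = hashlib.md5((salt + str(value)).encode()).hexdigest()
    return int(digest, 16) % n_buckets

def double_hash_encode(values):
    n_buckets = max(2, math.isqrt(len(set(values))) + 1)   # roughly sqrt(n) buckets per hash
    return [(bucket(v, "a:", n_buckets), bucket(v, "b:", n_buckets)) for v in values]

# Each of A and B would then be one-hot encoded (or embedded) and the parts concatenated.
print(double_hash_encode(["ios", "android", "windows", "ios"]))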

My opinion is that this is still one-hot encoding plus weight learning; we are just forcing a certain structure onto the weight matrix.

Feature selection 

Still based on one-hot encoding. However, instead of compressing everything into a tiny low-dimensional vector, we discard some of the dummy variables based on their importance. In fact, LASSO is used for exactly this: the L1 penalty usually drives the coefficients of some features to zero, due to the diamond shape of the constraint region (a small code sketch follows the sources below). Sources:

  • On why L1 gives sparsity: video here: https://www.youtube.com/watch?v=14MKVkhvMus&feature=youtu.be
  • Course notes here: https://onlinecourses.science.psu.edu/stat857/book/export/html/137 Stack Exchange answer here: https://stats.stackexchange.com/questions/74542/why-does-the-lasso-provide-variable-selection
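A minimal scikit-learn sketch of L1-based selection on one-hot columns (made-up data; in practice the regularization strength C would be tuned):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.choice(["a", "b", "c", "d"], size=200)
X = pd.get_dummies(pd.Series(cats), prefix="cat")   # one-hot columns
y = (cats == "a").astype(int)                       # a target that only depends on "a"

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)

# Columns whose coefficient is driven to (almost) zero are candidates to drop.
for name, coef in zip(X.columns, model.coef_[0]):
    print(name, round(coef, 3))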

Domain specific methods

These methods exploit the relationships between the symbols in the training set and learn a vector representation for every symbol (e.g. word2vec). This is certainly another way of vectorizing the values. I will probably write more about this after learning more about representation learning!

Python generators

This short post describes what a generator is in Python.

A function with yield in it is still a function; however, when you call it, it returns a generator object. Generators let you pause a function and return an intermediate result: the function saves its execution context and can be resumed later if necessary.

def fibonacci():
    a, b = 0, 1
    while True:
        yield b
        a, b = b, a + b

g = fibonacci()

[next(g) for i in range(10)]

This will return [1, 1, 2, 3, 5, 8, 13, 21, 34, 55].

Running the same list comprehension again returns the next ten numbers:

[next(g) for i in range(10)]

[89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765]

Here, note that the generator is like a machine that can produce what you want, and it remembers its last state. This is different from a function that returns a list, which does not remember where it left off.

Python generators can also interact with the code that drives them: yield becomes an expression, and a value can be passed in through a new method called send. Here is an example piece of code:

def psychologist():
    print("Please tell me your problem")
    while True:
        answer = (yield) # note the usage of yield here 
        if answer is not None:
            if answer.endswith("?"):  # note the method is endswith, with a lowercase s
                print("Don't ask yourself too many questions.")
            elif "good" in answer:
                print("Ah, that's good, go on.")
            elif "bad" in answer:
                print("Don't be so negative.")

This defines a function that can return a generator.

free = psychologist()

type(free)

This will return “generator”

next(free)

This will print “Please tell me your problem” (the generator runs until the first yield and pauses there).

free.send("what?")

This will print “Don’t ask yourself too many questions.”

free.send("I'm feeling bad.")

This will print “Don’t be so negative.”

free.send("But she is feeling good!")

This will print “Ah, that’s good, go on.”

send acts like next, but makes yield return the value passed. The function can, therefore, change its behavior depending on the client code.

Should I use an IDE, or should I use Vim?

This problem has been bugging me for a while, so I decided to write it out even though it’s just a short piece. This post compares three tools for Python programming:

  • Jupyter Notebook
  • IDEs like PyCharm
  • Text editors like Vim

Jupyter Notebook:

  • Pro: it is easy to visualize things. When you want a graph, you see the graph immediately, and the comments are beautifully formatted.
  • Another pro: it can be connected to a Linux server such as Dumbo.
  • Con: it is not a program. A notebook is its own file format, and although it can be downloaded as a .py file, that file is usually too long, with lots of comments such as typesetting parameters.

When to use it?

  • Data exploration, because of its visualization and analysis nature.
  • Big data. Because it can be connected to a server, running large amounts of data becomes possible.

PyCharm

  • Pro: it is built for Python development. I have not learned all of its functionality yet, but, for example, search and replace is easy in PyCharm, and debugging also seems to be easy.
  • Con: it is usually not available on a server.
  • Another con: it takes extra finger movement to switch between the terminal and PyCharm.

When to use it?

  • Debugging complicated programs, e.g. NLP programs.
  • When there is no need to run on a server.

Vim

  • Pro: it is everywhere, so writing on my own machine and writing on the server feel the same.
  • Another pro: it can be used for anything: Python, C++, Markdown, bash... so there is no need to switch tools after ssh-ing into the server.
  • Con: it is not that easy to use. For example, search and replace is hard to do, and adjusting tab settings is not immediately doable either.
  • Another con: it is not that easy to debug. I have to manually print out variables, which makes things particularly difficult when the program is large.

When to use it?

  • When I need to connect to a server, e.g. when the data is big.

Chi-square and two-sample t-test

This post explains a basic question I encountered, and the statistical concepts behind it.

The real-life problem

Someone asked me to construct data to show that a treatment is useful for preventing winter colds in 1) kindergarten and 2) elementary school kids.

Chi-square and Student’s t-test

First, decide how to formulate the problem as a statistical test. This includes deciding which quantity and which statistic to compare.

Basically, I need to compare two groups. Two tests come to mind: Pearson’s chi-square test and the two-sample t-test. This article summarizes the main differences between the two tests, in terms of null hypothesis, types of data, variations, and conclusions. The following section is largely based on that article.

Null Hypothesis

  • Pearson’s chi-square test: tests the relationship between two categorical variables, i.e. whether one variable has an effect on the other. E.g. men and women are equally likely to vote Republican, Democrat, other, or not at all. Here the two variables are “gender” and “voting choice”, and the null is “gender does not affect voting choice”.
  • Two-sample t-test: tests whether two samples have the same mean. Mathematically, this means \mu_1 = \mu_2 or \mu_1 - \mu_2 = 0. E.g. boys and girls have the same height.

Types of Data

  • Pearson’s chi-square test: usually involves two variables, each categorical, possibly with many levels. E.g. one variable is “gender”, the other is “voting choice”.
  • Two-sample t-test: involves two variables. One has exactly two levels (the two samples); the other is quantitative. E.g. in the example above, one variable is gender, the other is height.

Variations

  • Pearson’s chi-square test: a variation is when the two variables are ordinal instead of categorical.
  • Two-sample t-test: a variation is when the two samples are paired instead of independent.

Transform the real-life problem into a statistical problem

Using chi-square test

Variable 1 is “using treatment”. Two levels: use or not.

Variable 2 is “getting winter cold”. Two levels: get cold or not.

For kindergarten kids and for elementary school kids separately, I thus have two 2 × 2 tables.

(Question: can I run a chi-square test on three variables, the third one being “age group”?)

Using two-sample t-test

Variable 1 is “using treatment”. Two levels: use or not

Variable 2 is supposed to be numerical: here it would have to be the disease rate. But then each group contributes only a single rate, so there are not enough samples.

Thus, I conclude that the chi-square test should be used here.
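A minimal sketch of such a test in Python, using a made-up 2 × 2 table (treatment by cold) for one age group:

from scipy.stats import chi2_contingency

# Rows: treatment / no treatment; columns: got a cold / no cold (made-up counts).
table = [[12, 88],
         [25, 75]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")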

 

Brief explanation of statistical learning framework

This post explains the statistical learning framework and some common results under this framework.

Problem

We have a random variable X and another random variable Y. We want to determine the relationship between X and Y.

We describe the relationship by a prediction function f(x). For each x, this function produces an “action” a in the action space.

How do we get the prediction function f? We use a loss function l(a, y) that produces a “loss” for each pair (a, y). Note that since X is a random variable and f is a fixed transformation, a = f(X) is a random variable too.

Likewise, l(a, y) is a transformation of (a, y), so l(a, y) is also a random variable; its distribution depends on the distributions of both X and Y.

We then obtain f by minimizing the expectation of the loss, which is called the “risk”. Since the distribution of l(a, y) depends on the joint distribution of X and Y, taking this expectation means integrating over both X and Y; in the case of discrete variables, we sum over the pmf of (x, y).

The above concerns the theoretical properties of X, Y, the loss function, and the prediction function. In practice we usually do not know the distribution of (X, Y), so we minimize the empirical risk instead: sum the losses on the sample and divide by the sample size n. (Question: does this resemble the Monte Carlo method? Is this about computational statistics? Needs a review.)
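In symbols, using the notation above with a sample of size n, the risk and the empirical risk are:

  • Risk: R(f) = E[ l(f(X), Y) ], with the expectation taken over the joint distribution of (X, Y).
  • Empirical risk: \hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} l(f(x_i), y_i).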

Results

In the case of square loss, the result is a = E(Y | X = x).

In the case of 0-1 loss, the result is a = \text{arg max}_y P(y | x).
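As a quick numerical sanity check of the square-loss result, here is a small sketch (made-up numbers): among constant predictions, the average squared loss is smallest at the sample mean.

import numpy as np

y = np.array([35.0, 40.0, 45.0, 50.0])        # made-up observed outcomes
candidates = np.linspace(30.0, 55.0, 251)     # constant predictions to try

avg_sq_loss = [np.mean((y - a) ** 2) for a in candidates]
best = candidates[int(np.argmin(avg_sq_loss))]

print(best, y.mean())   # the minimizer is (approximately) the sample mean, 42.5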

 

Example:

We want to predict a student's midterm grade (Y) from whether she is hard-working (X).

We use square loss for this continuous variable Y. We know that to minimize expected square loss, we should predict the (conditional) mean of the variable (c.f. regression analysis; in the OLS scenario we minimize the MSE, though the connection to this framework needs to be spelled out further).

Now suppose we observe that, unfortunately, the student is not hard-working.

We know that for a non-hardworking student the expected midterm grade is 40.

We then predict the grade to be 40, as the way to minimize the expected square loss.

 

Probability, statistics, frequentist and Bayesian

This post is a review of basic concepts in probability and statistics.

Useful reference: https://cims.nyu.edu/~cfgranda/pages/DSGA1002_fall15/notes.html

https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/

Probability

It is a tool for measuring uncertainty mathematically.

Formal definition involving \sigma-algebra:

A probability space is a triple (\Omega, F, P) consisting of:

  • A sample space \Omega
  • A set of events F, which is a \sigma-algebra
  • A probability measure P that assigns probabilities to the events in F.

Example: we have a fair coin. Now we toss it 1000 times; what is the probability of getting 600 heads or more?
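For this example the answer can be computed directly from the binomial distribution; a small sketch with scipy:

from scipy.stats import binom

# P(X >= 600) for X ~ Binomial(n=1000, p=0.5): the survival function gives P(X > 599).
p = binom.sf(599, n=1000, p=0.5)
print(p)   # on the order of 1e-10: essentially impossible for a fair coin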

Statistics

The goal of statistics is to 1) draw conclusions from data (e.g. reject a null hypothesis) and 2) evaluate the uncertainty of those conclusions (e.g. a p-value, a confidence interval, or a posterior distribution).

At bottom, a statistical statement is also about probability, because statistics applies probability to draw conclusions from data.

Example: we would like to know whether the probability of rain tomorrow is 0.99. Then tomorrow comes, and it does not rain. Do we conclude that P(rain) = 0.99 was true?

Example 2: we would like to decide whether a coin is fair. (Data) We toss the coin 1000 times, and 809 of the tosses come up heads. Do we conclude that the coin is fair?

Note: probability is logically self-contained: there are a few rules, and the answers follow from them. Statistics can be messy, because it involves drawing conclusions from data, which is more art than science.

Frequentist vs Bayesian

These two schools of statistics differ in their interpretation of probability.

Frequentists interpret probability as the long-run frequency of events in repeated experiments. E.g. if P(head) = 0.6 and we toss the coin 1000 times, we expect about 600 heads.

Bayesians interpret probability as a state of knowledge, or a state of belief, about a proposition. E.g. P(head) = 0.6 means we are fairly certain (around 60% certain!) that the coin will come up heads.

In practice, though, Bayesians seldom use a single value to characterize such a belief; rather, they use a distribution.

Frequentist methods dominate in social science, biology, medicine, and public health: two-sample t-tests, p-values. Bayesian methods are common in computer science and “big data”.

Core difference between Frequentists and Bayesian

Bayesians take the results of previous experiments into account, in the form of a prior.

See this comic for an illustration.

What does it mean?

A frequentist and a Bayesian are making a bet about whether the sun has exploded.

It is night, so they cannot observe the sun directly.

They ask an expert whether the sun has gone nova.

They also know that the expert rolls two dice: if both come up 6, she lies; otherwise she tells the truth. (This is the data-generating process.)

Now they ask the expert, who tells them that yes, the sun has gone nova.

The frequentist reasons that the probability of rolling two sixes is 1/36 ≈ 0.028 < 0.05 (p < 0.05), so it is very unlikely that the expert lied. She therefore concludes that the expert did not lie, and hence that the sun has exploded.

The Bayesian, however, has a strong prior belief that the sun has not exploded (or else they would all be dead already). The prior distribution is

  • P(sun has not exploded) = 0.99999999999999999,
  • P(sun has exploded) = 0.00000000000000001.

Now, the data-generating process is essentially the following conditional distribution:

  • P(expert says sun exploded |Sun not exploded) =  1/36.
  • P(expert says sun exploded |Sun exploded) =  35/36.
  • P(expert says sun not exploded |Sun exploded) =  1/36.
  • P(expert says sun not exploded |Sun not exploded) =  35/36.

The observed data is “expert says sun exploded”. We want to know

  • P( Sun exploded | expert says sun exploded ) = P( expert says sun exploded | Sun exploded) * P( Sun exploded) / P(expert says sun exploded)

Since P(sun exploded) is extremely small compared with the other probabilities, P(sun exploded | expert says sun exploded) is also extremely small.

Thus, although the expert is unlikely to lie (probability 1/36 ≈ 0.028), the sun is far more unlikely to have exploded. The expert most likely lied, and the sun has not exploded.
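The arithmetic can be written out directly; a small sketch using the numbers above:

prior_exploded = 1e-17                  # P(sun exploded)
prior_fine = 1.0 - prior_exploded       # P(sun not exploded)

p_yes_given_exploded = 35.0 / 36.0      # expert tells the truth
p_yes_given_fine = 1.0 / 36.0           # expert lies (rolled double six)

evidence = (p_yes_given_exploded * prior_exploded
            + p_yes_given_fine * prior_fine)
posterior_exploded = p_yes_given_exploded * prior_exploded / evidence

print(posterior_exploded)   # about 3.5e-16, i.e. still essentially zero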

Literature Review: Enough is enough

Another six days have passed since I last updated this blog; I am still working on my MPhil thesis.

The problem? I started out too broad. After I sent an (overdue) partial draft to my supervisor, she suggested I stop reviewing new literature. I then began wrapping things up.

After drawing the limits of the literature, writing suddenly became much easier.

In fact, I write faster.

I also read faster. With papers on attitude change, it became easier to identify the key arguments and let go of the minor ones. With news about the background and history of protests in Hong Kong, it became easier to focus on what, and how much, my case actually needs. I briefly discussed the types of movement histories in Hong Kong without going deeper into SMO strategies.

Thus, one lesson might be that drawing boundaries is a hard but crucial step.

Another lesson might be that having a clear deliverable improves efficiency.

For example, this PhD student spent 10+ years in his program... and it seems he had a similar problem. On the surface it might look like procrastination; one level down there is anxiety, shame, guilt, and low self-esteem; one level further down, unclear goals and priorities.

What could I have done better to finish this quicker?

  • Talk to experienced people more often. Drawing boundaries is hard and there is no clearly defined rule, so the only way is to learn from experience and let them judge whether it is enough. (Tacit, uninstitutionalized knowledge.)

 

Literature Review: delete part of your writing might be the solution

I got stuck writing the second half of my literature review over the past few days. This post describes a way out.

My initial literature review has something on

  • 1) belief structure — in cultural sociology and cognitive sociology
  • 2) attitude change  — in social psychology
  • 3) political socialization — in political psychology

For a while I was stuck and did not know why. I tried to write about the agents of political socialization (family, peers, school, reference groups...).

But then I found some literature on undergraduate political socialization, which led me into yet another broad field. I stalled.

As with programming, when you find yourself thinking instead of writing, something is wrong.

After talking to a PhD student, I realized my problem was that I was trying to say too many things.

“Belief”, “opinion”, “attitude”, and “understandings” are not merely words in social science. They are concepts, so for each of them the literature is vast.

I cannot possibly write about both belief and attitude in my literature review; that would be too much. Moreover, they are not the same thing, so they do not hold together.

After deleting everything about belief structure and cognitive sociology, the literature review became much clearer.

A broader lesson is that it takes practice to recognize the scope of a literature, its theories, and what is useful. A good idea is a clear idea.