# The jianbing guozi chef at the Tianjin Grand Hotel went home today

(Yet another draft. Putting it on the blog feels a bit like a git add: public enough to count as a milestone, yet not posted to WeChat, so not too embarrassing. Writer’s block can probably only be cured by shamelessness. When I have time I will come back and expand this poorly-flowing piece…)

# Yet another lost North American science student, Xiao Wang, prepares to become a code monkey

Anyway, for science and engineering students of our generation in North America, the shattering of ideals usually comes with the following behaviors:
– complaining relentlessly to family, friends, and internet strangers about one’s original major, and discussing in depth the possibility of switching to CS
– buying programming books of every kind, e.g. Learn Python the Hard Way, Head First Java, Problem Solving With C++
– subscribing to a LeetCode VIP account
– registering an account on 一亩三分地 (1point3acres), stalking other Chinese students’ offers every day, and grinding through interview-experience posts

# Doctor AI: Using deep learning to automatically diagnose diseases

(This post is based on a project a classmate and I did in our first semester at NYU. I have been trying to turn it into a blog post for three months, and today I finally decided to get the task off my to-do list…)

Everyone has heard of AlphaGo, the hallmark of current artificial intelligence (AI) research, which can beat human Go champions.

But what about using AI to automatically diagnose diseases? Or, at least, to automatically narrow down the range of possible diseases and recommend suitable medical consultants? Imagine how much burden this would take off the always-overworked doctors. Imagine how much good this would do for our health care system!

In this blog post, we present a machine learning project that automates the identification of relevant diseases by using “intake clinical notes” to predict the final diagnosis code. From this code, we can then identify which consultants are relevant.

#### Data Source
As with any other current machine learning system (or really, even with human learning), an algorithm needs to see enough data before it can learn to make smart decisions.

So for our algorithm, we feed it a publicly available medical dataset called MIMIC (“Medical Information Mart for Intensive Care”). The dataset was compiled by a group of MIT researchers, and comprises over 40,000 hospital admission notes for more than 30,000 adults and more than 7,000 neonates admitted to critical care units at one hospital.

(It’s freely available at https://mimic.physionet.org/ in the form of a SQL database, if you would like to take a look!)

The complete dataset has rich information, including vital signs, medications, laboratory measurements, observations and notes taken by care providers. However, for our purpose, we only used two variables in the dataset:

– the medical discharge summary for each patient. A discharge summary is a letter written by the doctor caring for a patient in hospital. It contains important information about the patient’s hospital visit, including why they came into hospital, the results of any tests they had, the treatment they received, any changes to their medication, etc.

– the ICD-9 diagnoses for each patient. “ICD-9” stands for “International Classification of Diseases, Ninth Revision”. Basically, it is a three-digit code for a disease. For example, 428 stands for “heart failure”.

#### Data Transformation / Feature Engineering
One interesting thing about this project is that the model needs to learn from text. To tackle this challenge, we used techniques from natural language processing (NLP) to transform text into machine-readable inputs: numbers. In the jargon of machine learning, this is also called “feature engineering”.

How to transform text into numbers? We first tried the following methods for data transformation.

* tokenize the input text. This means we break sentences down into sequences of words. For example, ‘She loves dog’ becomes a list of words: (‘She’, ‘loves’, ‘dog’). We used the Python library NLTK (“Natural Language Toolkit”) to achieve this.

* turned all text into lower case, because we would like to prevent ‘She’ and ‘she’ from being recognized as two different words.

* converted all digits to the generic character \*, so that ‘2017-03-13’ becomes ‘\*\*\*\*-\*\*-\*\*’. This is because every discharge summary contains at least one date, but numbers are not so useful for our prediction, so we simply mask them.

After these steps, every discharge summary is turned into a list of words. The total corpus is then a list of lists, with every inner list being a discharge summary. Not bad!
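These steps can be sketched in a few lines of Python. This is a minimal sketch: a simple regex stands in for the NLTK tokenizer we actually used, and the sample sentence is made up.

```python
import re

def preprocess(summary):
    # lower-case, so 'She' and 'she' are the same word
    summary = summary.lower()
    # mask every digit with '*', so '2017-03-13' becomes '****-**-**'
    summary = re.sub(r"\d", "*", summary)
    # a simple regex tokenizer stands in for NLTK here
    return re.findall(r"[\w*-]+", summary)

preprocess("Admitted on 2017-03-13. She loves dog.")
# -> ['admitted', 'on', '****-**-**', 'she', 'loves', 'dog']
```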

Our next step is to turn the lists of words into lists of numbers. The method here is one-hot encoding: we construct a dictionary consisting of the most common words in the **corpus**, and then replace every word by its position in the dictionary. Easy!

For example, for this dataset we did the following:

* Construct a vocabulary consisting of words that appear at least 5 times in the dataset; we basically discard the rare words. After this step, we have a vocabulary of 38,119 unique words.
* Use these 38,119 unique words to index each word in the individual lists of words. For example, if “the” appears in the first position of the vocabulary, it is replaced by the number ‘1’.
* After creating the features, we transform each text summary into numbers by assigning each word (token) its count in the document.

For a simple example, consider the text “harry had had enough”. A Counter object on the tokenized text would yield {‘had’: 2, ‘harry’: 1, ‘enough’: 1}, so this sentence is transformed into the vector (2, 1, 1).
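Putting the vocabulary construction and the counting together, here is a minimal sketch in Python (a tiny made-up corpus; the real project used a minimum count of 5 on the full MIMIC corpus):

```python
from collections import Counter

corpus = [["harry", "had", "had", "enough"],
          ["she", "loves", "dog"]]

# build the vocabulary and assign each word a position
vocab = sorted({word for doc in corpus for word in doc})
index = {word: i for i, word in enumerate(vocab)}

def encode(doc):
    # represent a document by the count of each vocabulary word
    counts = Counter(doc)
    return [counts[word] for word in vocab]

encode(corpus[0])
# vocab is ['dog', 'enough', 'had', 'harry', 'loves', 'she'],
# so this gives [0, 1, 2, 1, 0, 0]
```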

We also used other preprocessing techniques to handle challenges in this dataset, including rare words and out-of-vocabulary words, but these techniques are omitted from this post for simplicity.

#### Model
We used a “deep learning” framework for this task. If that sounds pretentious, you are probably right! In fact, “deep learning” is just a fancy name for “neural networks with many layers”, while “neural network” is just a fancy way of saying “nested functions”. So nothing mysterious here.

We used a Hierarchical-Attention GRU (HA-GRU) model for this task (Yang, 2016; Baumel, 2017). This model’s main characteristics are that 1) it uses a hierarchical structure to encode at the sentence level and document level separately, and 2) it applies two levels of attention mechanisms, at the word level and the sentence level. All data manipulation and model training was done in Python 3.6.0 and PyTorch 0.2.0.

To understand this model, we need to understand two components: the GRU model and the attention mechanism. There are some complicated mathematical constructs involved, but the idea of this post is to describe the main ideas without digging into too much detail, and to point to other helpful sources where necessary.

The Gated Recurrent Unit (GRU) is a type of Recurrent Neural Network (RNN), which in turn is a type of neural network. Let’s start from the simplest construct: neural networks.

A neural network is a fancy way of saying “function approximation”. For example, suppose we have two variables, $X$ and $Y$: $X$ is a vector of length 3, and $Y$ is a vector of length 2.

We would like to discover the relationship between $Y$ and $X$ in the form of $Y = f(X)$.

We might first do a linear transformation on $X$ and get $g(X)$, then throw the $g(X)$ into another non-linear function $z(\cdot)$, and get the final result $z(g(X))$.

So in the end, we will have $Y = f(X) = z(g(X))$.
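To make the nested-function idea concrete, here is a minimal sketch in plain Python with made-up weights: $g$ is a linear map from $R^3$ to $R^2$, and $z$ is an element-wise sigmoid.

```python
import math

def g(x):
    # linear transformation from R^3 to R^2 (made-up weights)
    W = [[0.1, -0.2, 0.3],
         [0.4, 0.0, -0.1]]
    b = [0.05, -0.05]
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

def z(h):
    # element-wise sigmoid, a common non-linearity
    return [1 / (1 + math.exp(-h_j)) for h_j in h]

x = [1.0, 2.0, 3.0]
y = z(g(x))  # Y = f(X) = z(g(X)), a vector of length 2
```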

A neural network model is essentially a graphical way of presenting this mathematical transformation. If we present the previous mathematical formulation into a neural network, it will look like the following:

![image](neural_network.png)

Here is the correspondence:
– the first three nodes are our $X$; since its dimension is three, we have three nodes.
– the next two nodes are our transformed hidden vector $h$. The dimension of the hidden vector is two, so we have two nodes.
– the final two nodes are our predicted vector $Y$. The dimension is two, so we also have two nodes.

With the above understanding of simple neural networks: a recurrent neural network adds recurrence to the formula by passing the output of each simple neural network to the input of the next one. The idea is somewhat similar to an autoregressive time series model. Such an architecture helps with modeling long-term dependencies.
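The recurrent idea, a hidden state carried from step to step, can be sketched in plain Python (a minimal sketch with made-up weights, not an actual GRU):

```python
import math

def step(h_prev, x, w_h=0.5, w_x=1.0):
    # one recurrent step: the new hidden state depends on both
    # the previous hidden state and the current input
    return math.tanh(w_h * h_prev + w_x * x)

h = 0.0
for x in [1.0, -0.5, 0.25]:  # a toy input sequence
    h = step(h, x)
# h now summarizes the whole sequence, in order
```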

To understand more about neural networks, the following blog posts are excellent resources:
– http://karpathy.github.io/2015/05/21/rnn-effectiveness/
– http://colah.github.io/posts/2015-08-Understanding-LSTMs/

The attention mechanism, on the other hand, is built on top of recurrent neural networks. The mechanism multiplies the outputs at multiple time steps by a weight matrix, which may or may not depend on those outputs. There are many implementations of the attention mechanism; the following post serves as a great introduction to the topic:
– https://distill.pub/2016/augmented-rnns/

After understanding recurrent neural networks and attention, the specific architecture of our model – HA-GRU is easy to follow!

Basically, this model is a stack of two recurrent neural networks.
At the first level, we encode every sentence in a document into a vector. This is achieved by first embedding words into word vectors, then applying a bi-directional GRU (the word GRU) with a neural attention mechanism over the embedded words. The output of this step is a sequence of sentence vectors.

At the second level, we repeat the process (a recurrent neural network plus an attention mechanism), but this time at the sentence level.

#### Results
We trained our HA-GRU model with different parameter configurations for the embedding layer, word bi-GRU layer, and sentence bi-GRU layer. We used mini-batch gradient descent for numeric optimization.
After first running the model on our full dataset (over 29,000 summaries and 936 labels), we noticed that, because of the extreme sparsity of the label set, the results were negligible. So we decided to test a smaller label set, and finally achieved an accuracy ranging from 0.01 to 0.04. In other words, our model does not seem to make good predictions.

#### Discussion
While our results were not great, the project was still a good exercise in thinking about, and testing, how artificial intelligence models can help with medical practice. It also shows that, obviously, our Dr. AI has a lot of room for improvement!

# Three years after graduation

1. Why did I study sociology?

2. Why did I quit sociology?

3. What lessons did I learn?

• Do what genuinely interests you. Unfortunately, the education I received before college was very utilitarian, so I never realized how important it is to be driven by interest. The pressure of the gaokao was also enormous, leaving no time or energy to think about the question. Looking back, only when you are genuinely interested in something will you dare to take risks and be able to persist. From my admittedly shallow experience so far, the outstanding people among my peers all share the same traits: firm goals, the courage to make trade-offs, diligence, focus, and persistence. All of these probably presuppose genuine interest.
• Exploration vs. exploitation is a crucial trade-off in decision making, and the central problem in reinforcement learning.
• If you spend too much time exploring, you cannot reach the optimal solution.
• But if you exploit too early, you cannot reach the optimal solution either. This really is a truth about life!
• I hastily settled on switching to sociology in my freshman year, and failed to cut my losses during college and even my master’s; the price was heavy. That was exploiting too early.
• Then, when I decided to change careers in the summer of 2016, I spent too much time exploring. Had I committed to one direction from the start, I could probably have saved a year by now.
• Worship of elite schools. I wanted to attend a famous school, and later I did get the chance to work with top scholars at an elite American university, but prestige is ultimately hollow. During my three months at Northwestern I was not happy: distracted, and complaining constantly. I dragged down the people around me and squandered an opportunity my advisor had secured through his own network.
• Where does this elite-school worship come from? An example: a high school classmate’s gaokao score was not enough for Peking University’s regular admission track. Our teachers pushed him hard toward PKU’s medical school, but in the end he chose finance at Fudan. At the time I could not understand his choice. Now I realize that the mindset of treating Tsinghua and PKU as the only yardstick eventually exacts a heavy price.
• Following this thread further leads to the allocation of educational resources, and even to class and values.
• Why did I attend that particular high school?
• If I had gone to a different high school, could I still have gotten into a decent university?
• In college I discovered that many classmates came from better-off families than mine; yet compared with my own cousins, I was extremely lucky. This is one reason I chose sociology, and one reason my thinking leaned left. In the sociology of law there is a “two hemispheres” theory: lawyers from lower-prestige family backgrounds often take on public-interest litigation and end up serving individuals and small clients, leaning left; lawyers from privileged families tend to serve wealthy, powerful corporate clients. http://www.pkulaw.cn/fulltext_form.aspx?Db=qikan&Gid=1510166689&keyword=&EncodingName=&Search_Mode=

4. If you are also a sociology student considering whether to apply for a sociology PhD in the US: I have studied the “don’t go to grad school” genre in some depth, and collected the following representative pieces:

• Graduate School in the Humanities: Just Don’t Go. https://www.chronicle.com/article/Graduate-School-in-the/44846 The author is an English PhD from an elite school; the writing is fluent and elegant, and the points are incisive. Strongly recommended: better than 99% of the discussion on the Chinese internet. He also wrote several follow-ups.
• For sociology specifically, the author of orgtheory (a Chicago PhD who studies social movements and now, I believe, teaches at Maryland) wrote a book, Grad School Rulz!, that comprehensively covers common questions from application all the way to assistant professorship. On whether to do the PhD at all, his advice is also no.
• 100 Reasons NOT to Go to Graduate School. Aimed at the social sciences; somewhat repetitive, but overall very comprehensive. http://100rsns.blogspot.com

# Two approaches for logistic regression

Finally, a post in this blog that actually gets a little bit technical …

This post discusses two approaches to understanding logistic regression: the empirical risk minimization view and the probabilistic view.

Empirical Risk Minimizer

Empirical risk minimization frames a problem in terms of the following components:

• Input space $X = R^d$. Corresponds to observations, features, predictors, etc.
• Outcome space $Y = \Omega$. Corresponds to target variables.
• Action space $A = R$. An element of the action space is also called a decision function, predictor, or hypothesis.
• A subset of the action space is the hypothesis space, e.g. all linear transformations.
• Loss function: $l(\hat{y}, y)$, a loss defined on the predicted values and observed values.

The goal of the whole problem is to select a function $f$ in the hypothesis space that minimizes the total loss on the sample. This is achieved by selecting the values of the parameters of $f$ that minimize the empirical loss on the training set. We also do hyperparameter tuning, on a validation set, to prevent overfitting.

For logistic regression, the components are:

• Input space $X = R^d$.
• Outcome space $Y = \{-1, 1\}$. Binary target values.
• Action space $A = R$. The hypothesis space: all linear score functions,
• $F_{score} = \{x \mapsto x^T w \mid w \in R^d\}$
• Loss function: $l(\hat{y}, y) = l_{logistic}(m) = \log (1 + e^{-m})$
• This is a margin-based loss, hence the $m$.
• The margin is defined as $m = \hat{y} y$, which has a natural interpretation in binary classification. Consider:
• If $m = \hat{y} y > 0$, the prediction and the true value have the same sign, so in binary classification we already get the correct result. Thus, for $m > 0$, the loss should be small.
• If $m = \hat{y} y < 0$, the prediction and the true value have different signs, so in binary classification we are wrong. We need the loss to be positive here.
• In SVM, we use the hinge loss $l(m) = \max(0, 1-m)$, which is a “maximum-margin” loss (more on this in the next post, which will cover the derivation of SVM and kernel methods). Basically, for this loss there is no loss when $m \geq 1$, and positive loss when $m < 1$. We can interpret $m$ as the “confidence” of our prediction: $m < 1$ means low confidence, so we still penalize!
• With this intuition, how do we understand the logistic loss? We know:
• This loss is always positive.
• When $m$ is negative (i.e. a wrong prediction), the loss is greater.
• When $m$ is positive (i.e. a correct prediction), the loss is smaller.
• Note also that for the same increase in $m$, the rate at which we “reward” correct predictions is smaller than the rate at which we penalize wrong predictions.

Bernoulli regression with a logistic transfer function

• Input space $X = R^d$
• Outcome space $Y = \{0, 1\}$
• Action space $A = [0, 1]$. An action is the probability that the outcome is 1.

Define the standard logistic function as $\phi (\eta) = 1 / (1 + e^{-\eta})$

• Hypothesis space: $F = \{x \mapsto \phi (w^T x) \mid w \in R^d\}$
• A sigmoid function is any function with an “S” shape; the logistic function is one simple example. Sigmoids are used in neural networks as activation/transfer functions; their purpose is to add non-linearity to the network.

Now we need to relabel the $y_i$ in the dataset.

• For every $y_i = 1$, we define $y' = 1$
• For every $y_i = -1$, we define $y' = 0$

Can we do this? Doesn’t this change the values of the $y$s? The answer is that in binary classification (or any classification), the labels themselves do not matter; this trick just makes the equivalence much easier to show.

Then the negative log likelihood objective function, given this $F$ and dataset $D$, is:

• $NLL(w) = \sum_{i=1}^n \left[ -y_i' \log \phi (w^T x_i) - (1 - y_i') \log (1 - \phi (w^T x_i)) \right]$

How should we understand this approach? Think about a neural network:

• Input $x$
• First, a linear layer transforms $x$ into $w^Tx$
• Next, a non-linear activation function $\phi (\eta) = 1 / (1 + e^{-\eta})$
• The output is interpreted as the probability of the positive class
• For multi-class problems, the second layer is a softmax, and we get a vector of probabilities!

With some calculation, we can show that this NLL is equivalent to the sum of the empirical logistic losses from the first approach.
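The calculation is easy to check numerically. Below is a small sketch with made-up scores and labels: the margin-based logistic loss (labels in {-1, 1}) and the Bernoulli NLL (labels relabeled to {0, 1}) give exactly the same total.

```python
import math

def phi(eta):
    # the standard logistic function
    return 1 / (1 + math.exp(-eta))

def logistic_loss(m):
    # margin-based logistic loss
    return math.log(1 + math.exp(-m))

scores = [0.5, -1.2, 2.0]   # made-up values of w^T x
labels = [1, -1, 1]         # labels in {-1, 1}

margin_total = sum(logistic_loss(y * s) for s, y in zip(scores, labels))

nll_total = 0.0
for s, y in zip(scores, labels):
    y01 = (y + 1) // 2      # relabel: -1 -> 0, 1 -> 1
    nll_total += -y01 * math.log(phi(s)) - (1 - y01) * math.log(1 - phi(s))

# the two totals agree (up to floating-point error)
```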

# Encoding categorical features: likelihood, one-hot, and feature selection

This post describes techniques used to encode high cardinality categorical features in a supervised learning problem.

Since these values cannot be ordered, the features are nominal. Specifically, I am working with the Kaggle competition here. The problem with this dataset is that some features (e.g. the type of cell phone operating system) are categorical and have hundreds of values.

The problem is how to feed these features into our model. Nominal features work fine with decision trees (random forests) and Naive Bayes (which uses counts to estimate the pmf). But for other models, e.g. neural networks and logistic regression, the inputs need to be numbers.

Besides likelihood encoding, I will also go over other methods for handling such situations.

Likelihood encoding

Likelihood encoding is a way of representing values according to their relationship with the target variable. The goal is to find a meaningful numeric encoding for a categorical feature. Meaningful here means as closely related to the output/target as possible.

How do we do this? A simple way is to 1) group the training set by this particular categorical feature and 2) represent each value by the within-group mean of the target. For example, the categorical feature might be gender, and the target height. Suppose the average height for males is 1.70 m and the average height for females is 1.60 m. We then change ‘male’ to 1.70 and ‘female’ to 1.60.
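In code, this is just a group-by followed by a mean. A minimal sketch on made-up numbers, chosen so the group means come out to 1.70 and 1.60:

```python
from collections import defaultdict

# made-up training rows: (gender, height)
rows = [("male", 1.72), ("female", 1.58), ("male", 1.68),
        ("female", 1.62), ("male", 1.70)]

# group by the categorical feature and take the within-group target mean
groups = defaultdict(list)
for gender, height in rows:
    groups[gender].append(height)
encoding = {g: sum(hs) / len(hs) for g, hs in groups.items()}

# replace each category by its encoding
encoded = [encoding[g] for g, _ in rows]
```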

Perhaps we should also add some noise to this mean, to prevent overfitting to the training data. This can be done in two ways:

• Add Gaussian noise to the mean (credit to Owen Zhang).
• Use the idea of “cross-validation”: instead of using the grand group mean, use a cross-validated mean. (I am not very clear on this point at the moment; I need to revisit the idea of cross-validation and will write about it in the next post.) Some people on Kaggle propose using two levels of cross-validation: https://www.kaggle.com/tnarik/likelihood-encoding-of-categorical-features

One hot vector

This idea is similar to dummy variables in statistics. Basically, each possible value is transformed into its own column. Each of these columns is 1 if the original feature equals that value, and 0 otherwise.
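A minimal sketch (the category names here are hypothetical):

```python
def one_hot(value, categories):
    # one column per possible value: 1 where it matches, 0 elsewhere
    return [1 if value == c else 0 for c in categories]

categories = ["ios", "android", "windows"]
one_hot("android", categories)
# -> [0, 1, 0]
```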

An example is natural language processing models, where the first steps are usually to 1) tokenize the sentence, 2) construct a vocabulary, and 3) map every token to an index (a number, or basically a nominal value). After that, we do 4) one-hot encoding and 5) a linear transformation in the form of a linear layer in a neural network (basically transforming high-dimensional one-hot vectors into low-dimensional vectors). In this way, we represent every symbol as a low-dimensional vector whose exact form is learned. What is happening here is actually dimension reduction. So, besides learning the weighting matrix, other methods, like PCA, could potentially work here as well!

Hashing

A classmate named Lee Tanenbaum told me about this idea, which is an extension of one-hot encoding. Suppose the feature can take n values. Basically, we use two hash functions to hash the possible values into two new variables. The first hashes all values into about $\sqrt{n}$ buckets, so each bucket contains about $\sqrt{n}$ feature values; all feature values in the same bucket get the same value of variable A. Then we use a second hash function that carefully hashes the values into another bucket variable B, making sure that the combination of A and B can fully represent every possible value of the original feature. We then learn a low-dimensional representation for both A and B, and concatenate them together.
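As I understand it, the trick can be sketched like this. This is a deterministic toy version: instead of real hash functions, I use the row and column of each value's index in a k × k grid, which guarantees that the pair (A, B) uniquely identifies the value; the feature values are hypothetical.

```python
import math

values = ["android", "ios", "linux", "windows"]  # hypothetical feature values
k = math.ceil(math.sqrt(len(values)))            # about sqrt(n) buckets

def two_part_code(v):
    # "hash" A = row, "hash" B = column; together they are unique
    i = values.index(v)
    return (i // k, i % k)

two_part_code("linux")
# -> (1, 0); each of A and B then gets its own learned low-dim embedding
```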

My opinion on this is that it is still one-hot encoding plus weight learning; however, we are forcing a certain structure onto the weight matrix.

Feature selection

Still based on one-hot encoding, but instead of compressing everything into a tiny low-dimensional vector, we discard some dummy variables based on their importance. In fact, LASSO is used for exactly this! The L1 penalty usually drives the coefficients of some features to zero, due to the diamond shape of the constraint region. Sources:

• On why L1 gives sparsity, video here: https://www.youtube.com/watch?v=14MKVkhvMus&feature=youtu.be
• Course notes here: https://onlinecourses.science.psu.edu/stat857/book/export/html/137
• StackExchange answer here: https://stats.stackexchange.com/questions/74542/why-does-the-lasso-provide-variable-selection

Domain specific methods

These models exploit the relationships between the symbols in the training set, and learn a vector representation of every symbol (e.g. word2vec). This is certainly another way of vectorizing words. I will probably write more about this after learning more about representation learning!

# Python generators

This short post describes what a generator is in Python.

A function with yield in it is still a function; however, when called, it returns a generator object. Generators allow you to pause a function and return an intermediate result: the function saves its execution context and can be resumed later if necessary.

```python
def fibonacci():
    a, b = 0, 1
    while True:
        yield b
        a, b = b, a + b
```

```python
g = fibonacci()

[next(g) for i in range(10)]
```


This will return [1, 1, 2, 3, 5, 8, 13, 21, 34, 55].

When we run the list comprehension again, it will return:

```python
[next(g) for i in range(10)]
```


[89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765]

Note that the generator is like a machine that produces what you want on demand, and it remembers its last state. This is different from a function that returns a list, which does not remember its state between calls.

Python generators can also interact with the code that calls them via the next method: yield becomes an expression, and a value can be passed in with a new method called send. Here is an example piece of code:

```python
def psychologist():
    while True:
        answer = (yield)  # note the use of yield as an expression here
        if answer.endswith("?"):
            print("Don't ask yourself too many questions.")
        elif "good" in answer:
            print("A that's good, go on.")
        elif "bad" in answer:
            print("Don't be so negative.")
```


This defines a function that returns a generator.

```python
free = psychologist()

type(free)
```


This will return “generator”

```python
next(free)  # prime the generator so execution reaches the first yield

free.send("what?")
```


This will return “Don’t ask yourself too many questions.”

```python
free.send("I'm feeling bad.")
```


This will return “Don’t be so negative.”

```python
free.send("But she is feeling good!")
```


This will return, “A that’s good, go on.”

send acts like next, but makes yield return the value passed in. The function can therefore change its behavior depending on the client code.

# Should I use an IDE, or should I use Vim?

This problem has been bugging me for a while, so I decided to write it out even though it’s just a short piece. This post compares tools for Python programming:

• Jupyter Notebook
• IDEs like PyCharm
• Text editors like Vim

Jupyter Notebook:

• The pro is that it’s easy to visualize: when you want a graph, you see the graph immediately. The comments are also beautifully formatted.
• Another pro is that it can be connected to a Linux server like Dumbo.
• The con is that it’s not a program. A notebook is its own file, and although it can be downloaded as a .py file, the file is usually too long, with lots of comments such as typesetting parameters.

When to use it?

• Data exploration, because of its visualization and analysis nature.
• Big data. Because it can be connected to a server, running large amounts of data is possible.

PyCharm

• The pro is that it’s well suited for Python development. I have not learned all its functionality yet, but, e.g., search-and-replace is easy in PyCharm. Debugging also seems to be easy.
• The con is that it’s usually not available on a server.
• Another con is the extra finger movement when switching between the terminal and PyCharm.

When to use it?

• Debugging complicated programs, e.g. NLP programs.
• Programs that don’t need to run on a server.

Vim

• The pro is that it’s everywhere. Whether you write on your own machine or on a server, it feels the same.
• Another pro is that it can be used for anything: Python, C++, Markdown, bash… So there is no need to switch tools when you ssh into a server.
• The con is that it’s not that easy to use: e.g. search-and-replace is hard to do, and adjusting tabs is also not immediately doable.
• Another con is that it’s not easy to debug: you have to manually print out variables. This makes it particularly difficult when the program is large.

When to use it?

• When you need to connect to a server, e.g. for big data.

# Chi-square and two-sample t-test

This post explains a basic question I encountered, and the statistical concepts behind it.

The real-life problem

Someone asked me to construct data to show that a treatment is useful for 1) kindergarten and 2) elementary school kids in preventing winter colds.

Chi-square and Student’s t-test

First, decide how to formulate the problem using statistical tests. This includes deciding the quantity and statistic to compare.

Basically, I need to compare two groups. Two tests come to mind: Pearson’s chi-square test and the two-sample t-test. This article summarizes the main differences between the two tests in terms of null hypothesis, types of data, variations, and conclusions. The following section is largely based on that article.

Null Hypothesis

• Pearson’s chi-square test: tests the relationship between two variables, i.e. whether one variable has an effect on the other. E.g., men and women are equally likely to vote Republican, Democrat, other, or not at all. Here the two variables are “gender” and “voting choice”; the null is “gender does not affect voting choice”.
• Two-sample t-test: tests whether two samples have the same mean. Mathematically, this means $\mu_1 = \mu_2$, or $\mu_1 - \mu_2 = 0$. E.g., boys and girls have the same height.

Types of Data

• Pearson’s chi-square test: usually requires two variables. Each is categorical and can have any number of levels. E.g., one variable is “gender”, the other is “voting choice”.
• Two-sample t-test: requires two variables. One variable has exactly two levels (the two samples); the other is quantitative. E.g., in the example above, one variable is gender, the other is height.

Variations

• Pearson’s chi-square test: variations can be when the two variables are ordinal instead of categorical.
• two-sample t-test: variations can be that the two samples are paired instead of independent.

Transform the real-life problem into a statistical problem

Using chi-square test

Variable 1 is “using treatment”. Two levels: use or not.

Variable 2 is “getting winter cold”. Two levels: get cold or not.

For the kindergarten kids and the elementary school kids, I thus have two 2 × 2 tables.
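For one of these 2 × 2 tables, the Pearson chi-square statistic can be computed by hand. A sketch with made-up counts: rows are treatment / no treatment, columns are got a cold / no cold.

```python
def chi_square_2x2(table):
    # Pearson chi-square: sum of (observed - expected)^2 / expected
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# made-up counts for the kindergarten group
chi_square_2x2([[10, 40],
                [25, 25]])
# ≈ 9.89
```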

(Question: can I do a chi-square test on three variables, the third being “age”?)

Using two-sample t-test

Variable 1 is “using treatment”. Two levels: use or not.

Variable 2 is supposed to be a numerical variable; here that would be the disease rate. But then there are not enough samples.

Thus, I conclude that the chi-square test should be used here.

# Brief explanation of statistical learning framework

This post explains what a statistical learning framework is, and some common results under this framework.

Problem

We have a random variable X and another random variable Y. We want to determine the relationship between X and Y.

We define the relationship by a prediction function f(x). For each x, this function produces an “action” a in the action space.

Now how do we get the prediction function f? We use a loss function l(a, y) that, for each a and y, produces a “loss”. Note that since X is a random variable and f is a transformation, a = f(X) is a random variable too.

Also, l(a, y) is a transformation of (a, y), so l(a, y) is a random variable too. Its distribution depends on both X and Y.

We then find f by minimizing the expectation of the loss, which is called the “risk”. Since the distribution of l(a, y) depends on the distributions of both X and Y, to get this expectation we need to integrate over both X and Y. In the case of discrete variables, we sum over the pmf of (x, y).

The above concerns the theoretical properties of Y, X, the loss function, and the prediction function. But we usually do not know the distribution of (X, Y). Thus, we choose to minimize the empirical risk instead: we sum the losses over all m samples and divide by m. (Q: does this resemble the Monte Carlo method? Is this about computational statistics? Need a review.)

Results

In the case of square loss, we have the result a = E(y|x).

In the case of 0-1 loss, we have the result a = arg max P(y|x).
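The square-loss result is easy to verify numerically: over a grid of candidate actions, the one minimizing the empirical square loss is the sample mean (toy numbers below):

```python
ys = [40, 50, 60]  # made-up observed outcomes

def risk(a):
    # empirical risk under square loss for the constant action a
    return sum((a - y) ** 2 for y in ys) / len(ys)

candidates = [a / 10 for a in range(300, 701)]  # grid from 30.0 to 70.0
best = min(candidates, key=risk)
# best comes out to 50.0, the mean of ys
```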

Example:

We want to predict a student’s mid-term grade (Y). We want to know the relationship between the predicted value and whether she is hard-working (X).

We use square loss for the continuous variable Y. We know that to minimize square loss, we should predict the (conditional) mean value of the variable (c.f. regression analysis; in the OLS scenario we minimize the MSE, but the connection to this framework needs further work).

Now suppose we observe that, unfortunately, the student is not hard-working.

We know that for a not-hardworking student, the expected mid-term grade is 40.

We then predict her grade to be 40, as the way to minimize square loss.