Should I use an IDE, or should I use Vim?

This problem has been bugging me for a while, so I decided to write it out even though it’s just a short piece. This post compares tools for Python programming:

  • Jupyter Notebook
  • IDEs like PyCharm
  • Text editors like Vim

Jupyter Notebook:

  • The pro is it’s easy to visualize things: when you want a graph, you see the graph immediately. The comments are also beautifully formatted.
  • Another pro is it can be connected to a Linux server like Dumbo.
  • The con is that it’s not a program. A notebook is its own file, and although it can be downloaded as a .py file, that file is usually too long, with lots of comments such as typesetting parameters.

When to use it?

  • Data exploration, because of its visualization and analysis nature.
  • Big data. Because it can be connected to a server, running large amounts of data becomes possible.

IDEs like PyCharm:

  • The pro is it’s well suited for Python development. I have not learnt all the functionality, but, for example, search and replace is easy to do in PyCharm. Debugging also seems to be easy.
  • The con is it’s usually not available on a server.
  • Another con is the extra finger movement needed when switching between the terminal and PyCharm.

When to use it?

  • Debugging complicated programs, e.g. NLP programs.
  • Programs that do not need to run on a server.

Text editors like Vim:

  • The pro is it’s everywhere. Whether you write on your own machine or on the server, it feels the same.
  • Another pro is it can be used for anything: Python, C++, markdown, bash… So there is no need to switch tools when you ssh into the server.
  • The con is it’s not that easy to use: search and replace is hard, and adjusting tab settings is not immediately doable either.
  • Another con is it’s not that easy to debug: you have to manually print out variables, which becomes particularly difficult when the program is large.

When to use it?

  • When you need to connect to a server, e.g. for big data.

Chi-square and two-sample t-test

This post explains a basic question I encountered, and the statistical concepts behind it.

The real-life problem

Someone asked me to construct data to show that a treatment is useful for preventing winter colds in 1) kindergarten and 2) elementary school kids.

Chi-square and Student’s t-test

First, decide how to formulate the problem using statistical tests. This includes deciding the quantity and statistic to compare.

Basically, I need to compare two groups. Two tests come to mind: Pearson’s chi-square test and the two-sample t-test. This article summarizes the main differences between the two tests, in terms of null hypothesis, types of data, variations and conclusions. The following section is largely based on that article.

Null Hypothesis

  • Pearson’s chi-square test: tests the relationship between two variables, i.e. whether one has an effect on the other. E.g., men and women are equally likely to vote Republican, Democrat, other, or not at all. Here the two variables are “gender” and “voting choice”. The null is “gender does not affect voting choice”.
  • Two-sample t-test: tests whether two samples have the same mean. Mathematically, the null is \mu_1 = \mu_2, or \mu_1 - \mu_2 = 0 (the statistic is sketched below). E.g., boys and girls have the same height.
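For reference (a standard formula, not taken from the article above), one common form of the two-sample t statistic – Welch’s version, for unequal variances – is

t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{ s_1^2 / n_1 + s_2^2 / n_2 }},

where \bar{x}_i, s_i^2 and n_i are the sample mean, sample variance and sample size of group i.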

Types of Data

  • Pearson’s chi-square test: usually requires two variables. Each is categorical and can have any number of levels. E.g., one variable is “gender”, the other is “voting choice”.
  • Two-sample t-test: requires two variables. One variable has exactly two levels (hence “two-sample”); the other is quantitative. E.g., in the example above, one variable is gender, the other is height.

Variations

  • Pearson’s chi-square test: a variation is when the two variables are ordinal instead of categorical.
  • Two-sample t-test: a variation is when the two samples are paired instead of independent.

Transform the real-life problem into a statistical problem

Using chi-square test

Variable 1 is “using treatment”. Two levels: use or not.

Variable 2 is “getting winter cold”. Two levels: get cold or not.

For kindergarten kids and for elementary school kids, I thus have two 2 × 2 tables.

(Question: can I do a chi-square test with three variables, the third one being “age”?)

Using two-sample t-test

Variable 1 is “using treatment”. Two levels: use or not.

Variable 2 is supposed to be a numerical variable – here, the disease rate. But then there are not enough samples (each group yields only a single rate).

Thus, I conclude that the chi-square test should be used here.
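A quick sketch of how the chosen test could be run in Python, assuming scipy is available and using made-up counts for one of the 2 × 2 tables (say, the kindergarten group):

from scipy.stats import chi2_contingency

# Hypothetical counts: rows = treatment / no treatment, columns = got a cold / did not.
table = [[12, 88],
         [30, 70]]

chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)   # a small p-value rejects the null that treatment and catching a cold are independent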


Brief explanation of statistical learning framework

This post explains what a statistical learning framework is, and some common results under this framework.

The framework

We have a random variable X and another random variable Y. We want to determine the relationship between X and Y.

We define the relationship by a prediction function f(x). For each x, this function produces an “action” a in the action space.

Now, how do we get the prediction function f? We use a loss function l(a, y) that, for each a and y, produces a “loss”. Note that since X is a random variable and f is a transformation of it, a = f(X) is a random variable too.

Also, l(a, y) is a transformation of (a, y), so l(a, y) is a random variable too. Its distribution depends on both X and Y.

We then find f by minimizing the expectation of the loss, which is called the “risk”. Since the distribution of l(a, y) depends on the joint distribution of X and Y, to get this expectation we need to integrate over both X and Y. For discrete variables, we sum over the pmf of (x, y).

The above concerns the theoretical properties of X, Y, the loss function and the prediction function. But we usually do not know the distribution of (X, Y). Thus, we minimize the empirical risk instead, calculated by summing the losses on the observed samples and dividing by m. (Question: does this resemble the Monte Carlo method? Is this about computational statistics? Needs a review.)
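Written out, with (x_1, y_1), …, (x_m, y_m) denoting the observed samples (a standard formulation consistent with the definitions above):

R(f) = E[\, l(f(X), Y) \,], \qquad \hat{R}_m(f) = \frac{1}{m} \sum_{i=1}^{m} l(f(x_i), y_i)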

Common results

In the case of square loss, we have the result a = E(y|x), i.e. the best action is the conditional mean.

In the case of 0-1 loss, we have the result a = \arg\max_y P(y|x), i.e. the best action is the most probable class.
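A one-line check for the square-loss case: for a fixed x, minimize the conditional risk E[(Y - a)^2 \mid X = x] over a; setting the derivative to zero gives

\frac{d}{da} E[(Y - a)^2 \mid X = x] = -2\,(E[Y \mid X = x] - a) = 0 \;\Rightarrow\; a = E[Y \mid X = x].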


An example

We want to predict a student’s mid-term grade (Y). We want to know the relationship between the predicted value and whether she is hard-working (X).

We use square loss for this continuous variable Y. We know that to minimize square loss, we should predict the mean value of the variable (cf. regression analysis; in the OLS scenario we minimize the MSE – but the connection to this framework needs further work).

Now suppose we observe that, unfortunately, the student is not hard-working.

We know that for a not-hard-working student, the expected mid-term grade is 40.

We then predict the grade to be 40, which minimizes the square loss.


Probability, statistics, frequentist and Bayesian

This post is a review of basic concepts in probability and statistics.

Useful reference:

Probability

It’s a tool to mathematically measure uncertainty.

Formal definition involving \sigma-algebra:

A probability space is a triple (\Omega, F, P) consisting of:

  • A sample space \Omega
  • A set of events F – which is a \sigma-algebra
  • A probability measure P that assigns probabilities to the events in F.

Example: we have a fair coin and toss it 1000 times. What is the probability of getting 600 heads or more?
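This is a pure probability computation; a quick sketch, assuming scipy is available (the answer follows from the binomial distribution):

from scipy.stats import binom

# P(X >= 600) for X ~ Binomial(n=1000, p=0.5); sf(k) gives P(X > k), so use k = 599.
p = binom.sf(599, 1000, 0.5)
print(p)   # roughly 1e-10 – getting 600 or more heads from a fair coin is extremely unlikely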

Statistics

The goal of statistics is to 1) draw conclusions from data (e.g. reject a null hypothesis) and 2) evaluate the uncertainty of this information (e.g. p-value, confidence interval, or posterior distribution).

At bottom, a statistical statement is also about probability, because it applies probability to draw conclusions from data.

Example: We would like to know whether the probability of raining tomorrow is 0.99. Then tomorrow comes, and it does not rain. Do we conclude that P(rain) = 0.99 is true?

Example 2: we would like to decide whether a coin is fair. (Data) We toss the coin 1000 times, and 809 times it comes up heads. Do we conclude the coin is fair?
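One standard (frequentist) way to answer Example 2 is a binomial test; a sketch, assuming a recent version of scipy (binomtest was added in SciPy 1.7):

from scipy.stats import binomtest

# Null hypothesis: the coin is fair (p = 0.5). Observed: 809 heads out of 1000 tosses.
result = binomtest(809, n=1000, p=0.5)
print(result.pvalue)   # essentially zero, so we reject the hypothesis that the coin is fair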

Note: probability is logically self-contained – there are a few rules, and the answers follow from the rules. Statistics can be messy, because it involves drawing conclusions from data – as much art as science.

Frequentist vs Bayesian

These are two schools of statistics. They differ in their interpretation of probability.

Frequentists interpret probability as the frequency of events in repeated experiments. E.g., if P(head) = 0.6, then if we toss the coin 1000 times, we expect about 600 heads.

Bayesians interpret probability as a state of knowledge, or a state of belief, about a proposition. E.g., P(head) = 0.6 means we are fairly certain (around 60% certain!) that the coin toss will come up heads.

In practice, though, Bayesians seldom use a single value to characterize such a belief; rather, they use a distribution.

Frequentist methods are used in social science, biology, medicine and public health – we see two-sample t-tests and p-values. Bayesian methods are used in computer science and “big data”.

Core difference between Frequentists and Bayesian

Bayesians take the results of previous experiments into account, in the form of a prior.

See this comic for an illustration.

What does it mean?

A frequentist and a Bayesian are making a bet about whether the sun has exploded.

It’s night, so they cannot observe it directly.

They ask some expert whether the sun has gone Nova.

They also know that this expert will roll two dice. If both come up 6, she will lie; otherwise, she won’t. (This is the data generation process.)

Now they ask the expert, who tells them yes, the sun has gone Nova.

The frequentist concludes that since the probability of rolling two 6’s is 1/36 ≈ 0.028 < 0.05 (p < 0.05), it is very unlikely that the expert has lied. Thus she concludes the expert did not lie, and therefore that the sun has exploded.

The Bayesian, however, has a strong prior belief that the sun has not exploded (or else they would be dead already). The prior distribution is

  • P(sun has not exploded) = 0.99999999999999999,
  • P(sun has exploded) = 0.00000000000000001.

Now the data generation process is essentially the following distribution:

  • P(expert says sun exploded |Sun not exploded) =  1/36.
  • P(expert says sun exploded |Sun exploded) =  35/36.
  • P(expert says sun not exploded |Sun exploded) =  1/36.
  • P(expert says sun not exploded |Sun not exploded) =  35/36.

The observed data is “expert says sun exploded”. We want to know

  • P( Sun exploded | expert says sun exploded ) = P( expert says sun exploded | Sun exploded) * P( Sun exploded) / P(expert says sun exploded)

Since P(Sun exploded) is extremely small compared to other probabilities, P( Sun exploded | expert says sun exploded ) is also extremely small.

Thus, although the expert is unlikely to lie (p ≈ 0.028), it is far more unlikely that the sun has exploded. So the expert most likely lied, and the sun has not exploded.
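Plugging the numbers above into Bayes’ rule gives a quick numerical check (a sketch using the prior and likelihoods stated above):

# Prior and likelihoods from the lists above.
p_exploded = 1e-17                  # P(sun exploded)
p_says_given_exploded = 35 / 36     # expert tells the truth
p_says_given_not = 1 / 36           # expert lies

p_says = p_says_given_exploded * p_exploded + p_says_given_not * (1 - p_exploded)
posterior = p_says_given_exploded * p_exploded / p_says
print(posterior)   # about 3.5e-16 – still essentially zero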

Literature Review: Enough is enough

Another 6 days passed since I updated my blog – I’m still working on my MPhil thesis.

The problem? I started out too broad. After sending an (overdue) partial draft to the supervisor, she suggested I stop reviewing new literature. I then began wrapping things up.

After drawing the limits of the literature, writing suddenly became much easier.

In fact, I write faster.

I also read faster. With papers on attitude change, it became easier to identify key arguments and let go of minor ones. With news about the background and history of protests in Hong Kong, it became easier to focus on what and how much is needed for my case. I briefly discussed the types of movement histories in Hong Kong without going deeper into SMO strategies.

Thus, one lesson might be that drawing boundaries is a hard but crucial step.

Another lesson might be that having a clear deliverable improves efficiency.

For example, this PhD student spent 10+ years in his program… and it seems he had a similar problem. On the surface it might be procrastination. One level down, there is anxiety, shame, guilt and low self-esteem. One more level down, this comes from unclear goals and priorities.

What could I have done to finish this quicker?

  • Talk to experienced people more often. Drawing boundaries is hard and there is no clearly defined rule, so the only way is to learn from their experience and let them judge whether it is enough. (Tacit / uninstitutionalized knowledge.)


Literature Review: delete part of your writing might be the solution

I got stuck writing the second half of my literature review in the past few days. This post describes a solution to it.

My initial literature review has something on

  • 1) belief structure — in cultural sociology and cognitive sociology
  • 2) attitude change  — in social psychology
  • 3) political socialization — in political psychology

For a while, I was stuck and did not know why I was stuck. I tried to write on agents of political socialization (family, peers, school, reference group…).

But then I found some literature on undergraduate political socialization, and that led me to another rather broad field. I then became idle.

As with programming, when you find yourself thinking instead of writing, something is wrong.

After talking to a PhD student, I realized my problem was I was trying to say too many things.

“Belief” / “opinion” / “attitude” / “understandings” are not merely words in social science. They are concepts. So, for each of them, the literature is vast.

I cannot possibly write about both belief and attitude in my literature review, because that would be too much. Moreover, they are not the same thing, so they do not hold together.

After deleting all the writing about belief structure and cognitive sociology, the literature review became much clearer.

A broader lesson is that it takes practice to recognize the scope of a literature, its theories, and what is useful. A good idea is a clear idea.


I will probably apply for psychology grad programs, and I will write my first literature review chapter like this

This post describes 1) my past few days’ work since Christmas (not very much), 2) my change of future plans and the reasons for it, and 3) my progress on the MPhil thesis literature review chapter.

My last few days’ work

After the Christmas break my productivity decreased (from ~11 hours per day to ~8, then to ~5). This seems to validate the hypothesis that a structured environment and external accountability are important.

After coming back to my old office in Hong Kong, my productivity immediately decreased by at least 70%. Truly amazing. Possible explanations: 1) jet lag. I woke up at 4am the day before, slept at 4pm, then woke up at 10pm that day. In between I experienced alternating periods of hallucination (no kidding) and absent-mindedness. I sang along to YouTube videos for two hours early this morning (3am – 5am). Weird.

Explanation 2): this environment arouses certain negative emotions associated with past experience. Whenever I sit at my old desk I find it hard to concentrate, falling back into old patterns. In a new place where I had not worked before, I felt much better.

My changed future plans and why 

On another note, to the readers of this blog: I will very likely apply for PhD programs in psychology next year. Continuing to work on my master’s thesis (more on that shortly) has made me realize that my past struggles came from interests misaligned with my supervisor, my department, ethnography, and sociology in general.

The question I was interested in was in fact studied more by psychologists, and I am much more satisfied with their approach to it (belief formation, political attitude formation, etc.).

This is the reason I struggled for so long in my study. I first read about social movements when I was at UW-Madison. Then, in my first few months of the MPhil in 2015, I found some literature in public opinion research close to what I wanted to know, although it is not directly about protests. Protest scholars, on the other hand, are not that interested in opinions per se – they are interested in opinions (grievances) as an independent variable explaining the emergence of protests and participation.

That’s the first time I was confused. I treated the literature review process as finding answers, but I found that the answer did not exist. Mistake 1: for research, this is supposed to be a good thing (a research gap), but I did not realize it at the time. Out of undergraduate habit, I just thought that if I did not find an answer, it meant I hadn’t worked hard enough! (Two years later, in 2017, I wrote to Pam Oliver, then to a social movement mailing list she was in, then to John Josh, asking for literature – and indeed there is none from the angle I wanted.)

Back to Nov 2015. With my supervisor’s pointers I went to the methodological literature (grounded theory, Methods of Discovery, two books by Howard Becker – I found them not that helpful). I disliked my ethnography class from week 1, complained to my supervisors, but decided to hang on to it (while secretly hating it). Then during Chinese New Year 2016 I read about the necessity of narratives / contingencies, which led to some reading in the sociology of science – combined with a previous RA task on tacit knowledge – basically literature justifying why interpretative sociology / narrative methods in historical sociology are useful. I did not like the arguments, because I found the concepts vague and the logic shabby. Mistake 2: did not take action immediately due to fear of authority.

Then around March 2016, I went to sit in on Xiang Biao’s anthropology class. I read some beautiful anthropology, but it was not relevant to my research. Foucault writes about biopolitics and power/knowledge, which led to some efforts in vain. Around the same time, still trying to make sense of ethnography, I studied Alice Goffman’s On the Run and Sida Liu’s doctoral thesis. Not much progress on the thesis.

In the meantime, I read into cultural sociology (cognitive sociology?) but did not find a fit. Cultural sociology concerns culture and action, but 1) I’m not interested in culture and 2) I’m not interested in action.

In the meantime I read Andreas Glaeser’s book Political Epistemics, recommended by my supervisor. We seemed to agree on using this framework. The book further leads to phenomenology and social ontology. Nobody cited the book back then; in 2017 my supervisor published a paper using it. Looking back now, I do not really like or believe the theory. I have tried to read it so many times, and I will use the theory in my thesis for graduation. But if I am being utterly honest, I don’t understand why the theory is good and I do not feel comfortable using it. I don’t know why.

By this time I was basically disillusioned and disenchanted with sociology and academia. Around May or June I began seriously considering a career change. Mistake 3: when facing a problem, not thinking about solutions but wanting to run away!

For around 12 months I did not seriously work on my thesis. In Nov and Dec I did a whole round of data coding, but not much writing. Thinking back, this was typical procrastination. In 2017 I started to learn about machine learning and coding. I read briefly into cognitive sociology, ideology… Late April to July, an internship. Mistake 4: poor time management skills and no priorities.

Reflecting on this experience: the root problem is that my literature review always felt uncomfortable, so I could not formulate a research question from any of the directions I had searched. I felt uncomfortable because belief formation is a problem in social psychology, not sociology, which I realized pretty late.

Thus I will probably apply for grad schools in psychology next year.

I will write my literature review draft this way

The literature review will have two chapters: why do people have different understandings of the same social movement, and how do they form these understandings.

The why question is tricky. I imagine myself talking to sociologists of protest, so I should cite them. But they have not studied the problem directly. The closest I can find are 1) relative deprivation theory, 2) framing, 3) social identity, and 4) system justification. The end goal of all these theories is to explain participation. But I can use them to say that people understand a movement differently because some have feelings of relative deprivation while others don’t.

Then a natural question is: what leads to perceptions of relative deprivation? Here sociology stops, content with describing the understandings themselves (meaning-making). Psychologists do answer this: one theory is frustration–aggression, another is cognitive dissonance. These two theories are what I really wanted to know about.

I will do the same for framing, social identity and system justification. The part of each that describes a psychological process is useful, while the part on how that psychological process leads to participation is not. Lesson 1: how to selectively use a theory when it is not a good fit but is the only thing I can find. (I might not be doing this right, though.)

How do they form these understandings? This part will likely draw on political socialization, persuasion, and attitude change. Still quite broad. Lesson 2: when reading a new field for a question, do not be carried away by its arguments. One paper leads to another, and this needs to be controlled.

I developed a standard workflow that I currently find helpful:

  • Read one highly cited document – any paper containing the keywords, preferably a review paper.
  • Use the Harvard thesis guide literature review template to trace the history of the theory. Note key sources.
  • Write down bibliographic information for 5–7 key sources.
  • Download these papers. Do not download anything beyond these 5–7. Keep only the most important papers! (Dr. Tian once taught me to find the citation list and use the entries with the highest citation counts.)
  • Take notes on these 5–7 papers using the Harvard thesis guide template.
  • Print out the reading notes. Select the top-priority papers again.

This is like preventing over-fitting in machine learning, because both deal with the trade-off between generalization and specific problems. The best strategy is thus early stopping – leave some of the specific data points unvisited for better generalization.


Update, 2018 Mar 13: I gave up on this idea after taking a course in psychology. The reason is that I am not really interested in their debates.

Debugging larger pyspark ML programs

This post describes my learning experience in developing larger programs, especially those that:

  • Take a long time to run – due to big data sets and computationally intensive algorithms
  • Require development both locally and on HPC – that is, they cannot be handled entirely in a Python IDE

The takeaway is:

  • To save time, try writing scripts in one place only.
  • Do not develop interactively and then paste everything into an editor!

The problem

I found myself spending excessive time (~4 days) developing a program that should be simple in its logic. Basically, I just needed to call Pyspark API functions to do classification on the 20 Newsgroups dataset. It should have been a simple program!

How did I spend my time?

  • On the first day, I found a script doing a similar task. When I tried to use it, I ran into the following problems:
    • Reading files on HDFS. Spark only works with files on HDFS, not the local file system. This mistake took me some time, as I thought the problem was with the syntax for reading nested folders. (A minimal reading-and-cleaning sketch follows this list.)
    • The script does not clean the text – remove headers, numbers, punctuation. To do this, I had to understand the str.translate function. It also works differently in Python 2 and Python 3, which took me some time to realize.
    • The script uses Pyspark SQL, so it took some time to learn DataFrames as well.
  • By day 2 the data was read into a Spark DataFrame.
    • I then had problems calling functions from MLlib, because I didn’t realize ML and MLlib are two different libraries with different data structures. I ran into problems when applying an MLlib function to ML data.
    • I also tried to convert the data structure back and forth, from RDD to LabeledPoint to ML.
    • To inspect what is in the data, I also spent time calling the wrong functions, or transforming everything into an RDD and calling map functions.
  • By day 3 I intended to use spark-submit on HPC. The main task was learning to use an editor.
    • Because someone told me I should be using an editor instead of debugging interactively – or else I cannot see the code structure – I began to learn vim. That took a morning or so (!).
  • By day 4, I was trying to clean up the code and write functions.
    • This created another level of complexity. One of the bugs was forgetting to update a function argument.
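For reference, here is a minimal sketch of the reading-and-cleaning step, assuming a hypothetical HDFS path for the 20 Newsgroups folders (the path and column names are made up for illustration):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("newsgroups-sketch").getOrCreate()

# wholeTextFiles reads nested folders as (path, content) pairs; note the hdfs:/// prefix,
# since Spark on the cluster reads from HDFS rather than the local file system.
rdd = spark.sparkContext.wholeTextFiles("hdfs:///user/me/20news/*/*")
df = rdd.toDF(["path", "text"])

# Very rough cleaning: lower-case, then replace numbers and punctuation with blanks.
df = df.withColumn("text", F.lower(F.col("text")))
df = df.withColumn("text", F.regexp_replace("text", r"[^a-z\s]", " "))
df.show(3)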

Why was this inefficient?

  • I typed every line of code three to four times: first on my workstation, then in the server’s pyspark shell, and finally I copied the code into a script. This not only creates room for mistakes, but is also inefficient.
    • I did this because I am not yet comfortable writing scripts on HPC.
    • Also because I am not comfortable debugging with a script and an editor, without an IDE.
  • I entered every line of code at least three to four times.
    • First in interactive mode.
    • Then, copying the line into my editor. Since the project spans several days, every time I start again I need to re-read the files!
    • Then I create a script for a small dataset on HPC and run that.
    • After that, I run the script again on a larger dataset.
  • Since I am debugging in multiple places, I need to do version control as well.
    • I remember scp-ing files back and forth. Whenever I edited something, I removed the older version of the program at the other place.
  • I am not familiar with the data structures and functions in Pyspark. This also wastes time.
  • Interactive mode in Spark is slow. I once forgot to cut the dataset down, and running one command (e.g. a transform) on the whole dataset took 10 minutes! With 20 commands like this, that would be ~3 hours.


  • The above problems can be summarized as:
    • I need to write scripts in multiple places: on the server and locally.
      • Solution: learn to use an editor on the server, e.g. Vim.
    • Submitting the program interactively vs. in batch. I do not know how to debug from a script, so I have to use interactive mode to make sure I know what I am doing. But debugging interactively means double the amount of typing, because every command goes through both the terminal and the editor. It also means running the program twice.
      • Solution: learn to run the program from the script. The con is that data has to be loaded multiple times; also, use a smaller dataset so that data loading is not a pain.
    • Time spent on start-up. The data file is huge, so re-loading it takes time. If loading the dataset takes 3 minutes, loading it 20 times is 60 minutes.
      • Solution: use a smaller sample dataset for development (see the sketch after this list).
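A small sketch of the sampling idea – cutting the data down while developing, assuming the DataFrame df from the earlier sketch (the fraction and seed are arbitrary):

# Work on ~1% of the data while developing; switch back to the full df for the real run.
dev_df = df.sample(withReplacement=False, fraction=0.01, seed=42)
dev_df.cache()        # keep the small sample in memory across repeated commands
print(dev_df.count())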

Moral: try to write only one set of programs in a single place.

  • For pyspark, I can only use HPC. So I just write on HPC.
  • Pyspark also cannot use pdb.
  • For Python, I should test everything locally first. There are two final programs
    • A script that can run locally and on HPC.
    • A script to be submitted to the cluster.

Debugging philosophy

  • “Bugs” are not bugs but errors. The responsibility lies with the programmer. A program that has errors is simply wrong, usually because the programmer is not really familiar with the rules and grammar of the library – using the wrong data structure, calling the wrong function, etc.
  • Debugging is a learning experience. Why does debugging take so much time? Because the programmer is learning something new and needs time to try things and make mistakes. There seems to be a wishful belief that a perfect program will magically start working by itself; it won’t, because learning always takes time!

Pyspark ML vs MLLib

A post that summarizes the main differences between Pyspark ML and MLlib. This is based on Spark 2.2.0 and Python 3.

Data Structure

  • pyspark.mllib is the older machine learning library. It can only use RDDs of LabeledPoint, but it has more features than ML.
  • pyspark.ml is the newer machine learning library. It can only use SQL DataFrames, but it is easier to construct an ML pipeline with it.
  • It is recommended to use the DataFrame-based API (i.e. ML) because it is newer and more stable.

Feature selection

The two libraries seem to be similar in terms of feature selection APIs.

Pipeline

  • pyspark.ml implements the Pipeline workflow, which is based on DataFrames and can be used to “quickly assemble and configure practical machine learning pipelines”. (A small sketch follows.)
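A minimal sketch of what such a pipeline can look like, assuming a toy DataFrame with “text” and “label” columns (the data below is made up):

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Toy data standing in for a real text classification dataset.
train = spark.createDataFrame(
    [("the cat sat on the mat", 0.0), ("stocks rallied sharply today", 1.0)],
    ["text", "label"])

# Each stage's output column feeds the next stage's input column.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="tf")
idf = IDF(inputCol="tf", outputCol="features")
lr = LogisticRegression(maxIter=10)

model = Pipeline(stages=[tokenizer, tf, idf, lr]).fit(train)
model.transform(train).select("text", "prediction").show(truncate=False)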

Classification algorithms provided by MLlib:

Classification algorithms provided by ML:

It seems that ML has a multilayer perceptron classifier, which is not in MLlib.


  • pyspark ml implements while pyspark.mllib does not
  • However, Pyspark ML’s evaluation functions are difficult to use and the docs are not clear. Thus, I changed back to MLlib for evaluation.

When to use which?

  • Efficiency: MLlib uses LabeledPoint RDDs, while ML uses structured DataFrames. Thus, “if the data is already structured DataFrames, then ML confers some performance benefits over RDDs; this is apparently drastic as the complexity of your operations increases.”
  • Resources: DataFrames consume far less memory when cached than RDDs. Thus, for lower-level operations RDDs are great, but for high-level operations, viewing, and interoperating with other APIs, use DataFrames. (The above quotes Stack Overflow user Grr.)
  • It is recommended to use ML and DataFrames. However, if you want to use grid search, it seems you still need to revert to LabeledPoint!

References:

A stackoverflow question:

MLlib doc:

ML doc:

Comparison of data structures:


Clean Text in Python3

A pain in the ass. This post summarizes the “best” approaches to cleaning text data in Python 3.

It will not cover deprecated syntax from Python 2. For example, string.maketrans has a different usage in Python 2 – it is not discussed here.

Is that a string or a Unicode?

Reference here.

When you see a string in Python 2, there are two possibilities:

  • ASCII strings: Every character in a string is a byte. Look up the hexadecimal value in the ASCII table.
  • Unicode strings: every character in a string is one or more bytes. Look up the hexadecimal values in a Unicode encoding – there are many encodings, and the most popular is UTF-8.
    • Example: 猫 is represented by three bytes in UTF-8. When Python 2 reads this, it gets it wrong – it thinks there are three characters when there is in fact just one, because Python 2 decodes with ASCII by default.
      • To produce the correct representation, use x.decode('utf-8')

String vs sequence of bytes in Python2

  • A string is a sequence of Unicode code points. These are abstract concepts (characters) and cannot be stored on disk directly.
  • Bytes are actual numbers; they can be stored on disk.
  • Anything has to be mapped to bytes to be stored on a computer.
  • To map code points to bytes, use a Unicode encoding.
  • To convert bytes back to a string, use decoding.

String vs sequences of bytes in Python3

  • In sum, in Python 3, str represents a Unicode string, while the bytes type is a sequence of bytes. This naming scheme makes a lot more intuitive sense.

Encode vs Decode

  • Representing a Unicode string as a sequence of bytes is known as encoding; turning bytes back into a Unicode string is known as decoding.
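A tiny round-trip example (assuming UTF-8):

s = '猫'                  # str: one Unicode character
b = s.encode('utf-8')     # bytes: b'\xe7\x8c\xab' – three bytes
assert b.decode('utf-8') == s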

Remove punctuation

The best answer (I think) from Stack Overflow:

import string

# Thanks to Martijn Pieters for this improved version

# This uses the 3-argument version of str.maketrans
# with arguments (x, y, z) where 'x' and 'y'
# must be equal-length strings and characters in 'x'
# are replaced by characters in 'y'. 'z'
# is a string (string.punctuation here)
# where each character in the string is mapped
# to None
translator = str.maketrans('', '', string.punctuation)

# This is an alternative that creates a dictionary mapping
# of every character from string.punctuation to None (this will
# also work)
#translator = str.maketrans(dict.fromkeys(string.punctuation))

s = 'string with "punctuation" inside of it! Does this work? I hope so.'

# pass the translator to the string's translate method.
print(s.translate(translator))
# -> 'string with punctuation inside of it Does this work I hope so'

The code above removes punctuation by simply deleting it.

Replace punctuation with a blank

This method uses a regular expression… I find it better than using a translator!

import re
text = 'string with "punctuation" inside of it!'   # example text (hypothetical)
re.sub(r'[^\w]', ' ', text)   # every non-word character becomes a blank

Dealing with all kinds of blanks

Again borrowing from this awesome stackoverflow post…

# Remove leading and ending spaces, use str.strip():
sentence = ' hello  apple'
sentence.strip()
>>> 'hello  apple'

# Remove all spaces, use str.replace():

sentence = ' hello  apple'
sentence.replace(" ", "")
>>> 'helloapple'

# Remove duplicated spaces, use str.split(), then join the words together

sentence = ' hello  apple'
" ".join(sentence.split())
>>>'hello apple'

Summary of workflow:

  1. Decide whether you have a string (the “str” type) or a sequence of bytes. This determines whether you need to decode first. Ultimately we want a string, i.e. the “str” type.
  2. Import the re module. IMHO this is the most convenient way to remove punctuation.
    1. re.sub can replace numbers and punctuation with blanks (given a suitable pattern). This way, two unrelated words are not joined together when they were connected by punctuation.
  3. Use s.lower() to change the string to lower case. Note that s should be a str here, not bytes.
  4. Use s.strip() to strip the leading and trailing blanks.
  5. Use " ".join(s.split()) to join the words back together with only one blank between them, if needed.
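Putting the workflow together, a minimal sketch (clean_text is a hypothetical helper name, and the exact patterns are just one reasonable choice):

import re

def clean_text(raw):
    # Step 1: make sure we have str, not bytes.
    if isinstance(raw, bytes):
        raw = raw.decode('utf-8')
    # Step 2: replace punctuation (and numbers) with blanks.
    text = re.sub(r'[^\w\s]', ' ', raw)
    text = re.sub(r'\d+', ' ', text)
    # Steps 3-4: lower-case and strip leading/trailing blanks.
    text = text.lower().strip()
    # Step 5: collapse duplicated blanks.
    return ' '.join(text.split())

print(clean_text(' Hello,  "WORLD"!!  123 '))   # -> 'hello world'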