joke2punchline, punchline2joke: Using a Seq2Seq Neural Network to "Translate" Between Jokes and Punchlines

> what do you call an unpredictable chef ?
< ouch .

After implementing the seq2seq model, an encoder-decoder network with attention, I wanted to get it to translate between jokes and punchlines. The scripts, pre-trained models, and training data can be found on my GitHub repo.

Model Overview

The underlying model is a PyTorch implementation of the sequence-to-sequence (seq2seq) model, an encoder-decoder network with an attention mechanism. Seq2seq can map an arbitrary input text sequence to an arbitrary output text sequence; a more conventional application would be translating English to French or vice versa. For this project, I trained the seq2seq model on question-answer format jokes, so that it can output a punchline given a joke, or output a joke given a punchline.
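
Concretely, the two directions differ only in which side of each question-answer pair is treated as the source sequence. A minimal sketch of the pair preparation (the variable names and the two example jokes are illustrative, not taken from my actual scripts or dataset):

```python
# Each training example is a (source, target) pair of token strings.
# joke2punchline trains on (joke, punchline) pairs; punchline2joke
# simply trains on the same pairs reversed.
jokes = [
    ("what do you call a fake noodle ?", "an impasta !"),
    ("why did the bicycle fall over ?", "it was two tired !"),
]

joke2punchline_pairs = [(q, a) for q, a in jokes]
punchline2joke_pairs = [(a, q) for q, a in jokes]
```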

Results

Overall, the results were somewhat nonsensical, as one might expect. I curated these results by hand, keeping only outputs that made me, at minimum, smile. Yes, I do laugh at my own jokes.

For the following examples, > represents the text input, < represents the model output. I’ve selected examples where the joke or punchline is not directly from the training set, i.e. excluding any output that is simply being regurgitated from the original dataset.

Joke2Punchline

For the following examples, the first line is a fake joke that I wrote using words from the model's joke vocabulary and fed into the model (>), and the second line is the punchline output by the model (<).

> what do you call an unpredictable chef ?
< ouch .

> what do you call a pile of pillowcases ?
< screw music

> why was the sun hospitalized ?
< because he was sitting on me .

> what do you call an unhappy star ?
< stay here !

> what do you call an unhappy star ?
< days numbered !

> what is a strawberry s favorite month ?
< a cat !

> who s there ?
< in the dictionary .

> what is red and bad for your teeth ?
< a a gummy bear

> what treasure can you find on a thursday ?
< the lettuce !

> when is a sun a moon ?
< a barber driver

> how many bananas can the moon eat ?
< fish and the train .

> what do you call an upside down egg ?
< the dough

> why was the sun unhappy ?
< because he wanted to tearable time paste !

> what did the skeleton say when they died the wrong year ?
< it march

> how many snails does it take to get to the moon ?
< to the hot hot shakespeare !

> why was the moon crying ?
< because he was on the deck !

> where do sheep go to school ?
< they take the mile bison of course !

> how many emotions does the sun have ?
< he got cents

Punchline2Joke

For the following examples, I fed the model fake punchlines written using words from the model's punchline vocabulary, and the model output a joke that could lead to the input punchline. The first line is the fake punchline I fed into the model (>), and the second line is the joke output by the model (<).

> two parents
< what has four wheels and flies over the world ?

> watermelon concentrate
< when do you stop at green and go at the orange juice factory ?

> cool space
< what do you call an alligator in a vest with a scoop of ice cream ?

> meteor milk
< what do you call a cow that is crossing ?

> one two three four
< what did the buffalo say to the bartender ?

> jalapeno ketchup
< what do you call a boy with no socks on ?

> ice cream salad !
< what did the fish say to the younger chimney ?

> the impossible !
< what did the worker say when he swam into the wall ?

> both !
< what do you call a ghosts mom and dad ?

> pasta party
< what do you call the sound a dog makes ?

> salad party
< what did the buffalo say to the patella ?

> dreams party
< what do you call the sound with a fever ?

> a thesaurus and a dictionary
< what kind of shorts do all spies wear ?

Considerations

Training Data

To train the model, I needed a dataset of clean jokes in question-answer text format.

While I did find a dataset of question-answer format jokes, the jokes are scraped from Reddit's r/jokes subreddit. Going through the file, I did not like most of the jokes at all; many were highly problematic (often racist, sexist, or queerphobic), and I would rather compile my own dataset than feed bad data into my model.

One option would be to filter this dataset using a set of "bad" keywords, but trying to filter a heavily biased dataset was less appealing to me than creating a new set entirely. An alternative would be to write a scraper for r/cleanjokes, keeping only question-answer format jokes, but I didn't want to invest too much time or energy in this toy project, and I'm personally not a fan of using Reddit for training data in general.

I ended up compiling my own small dataset of clean jokes in the question-answer format, consisting of a little over 500 jokes total. A major trade-off was that the model’s vocabulary is relatively limited, but I enjoyed the jokes much more and felt much better about the data I was feeding into the model.
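
Because the vocabulary comes entirely from those ~500 jokes, any fake joke or punchline has to be written with words the model has indexed. A sketch of the word-indexing step, modeled on the PyTorch tutorial's Lang class (the class and variable names here are mine, not from my actual scripts):

```python
SOS_TOKEN, EOS_TOKEN = 0, 1  # reserved start/end-of-sentence indices

class Vocab:
    """Maps words to indices and back, built from whitespace-tokenized text."""
    def __init__(self):
        self.word2index = {}
        self.index2word = {SOS_TOKEN: "SOS", EOS_TOKEN: "EOS"}
        self.n_words = 2  # count the SOS and EOS tokens

    def add_sentence(self, sentence):
        for word in sentence.split():
            if word not in self.word2index:
                self.word2index[word] = self.n_words
                self.index2word[self.n_words] = word
                self.n_words += 1

joke_vocab = Vocab()
joke_vocab.add_sentence("what do you call an unpredictable chef ?")
```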

Teacher Forcing

For the joke2punchline and punchline2joke models, the teacher forcing ratio was set to 0.5. I’d be curious to adjust this parameter and see the results. I would expect a lower ratio to result in more nonsensical output, whereas a higher ratio would likely result in more outputs that are directly from the training set.

I think an ideal setup would be to lower the teacher forcing ratio in addition to having a much larger training set.
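
For context, teacher forcing at ratio 0.5 means that on roughly half of the decoding steps the decoder is fed the ground-truth token rather than its own previous prediction. A framework-free sketch of just that control flow (the step function is a stand-in, not my actual decoder):

```python
import random

def decode_with_teacher_forcing(step_fn, target_tokens, ratio=0.5, sos="SOS"):
    """One training-time decoding pass; step_fn(prev_token) predicts the next token."""
    outputs = []
    prev = sos
    for target in target_tokens:
        pred = step_fn(prev)
        outputs.append(pred)
        # With probability `ratio`, feed the ground-truth token back in as the
        # next input instead of the model's own prediction.
        prev = target if random.random() < ratio else pred
    return outputs
```

At ratio 1.0 the decoder always sees the ground truth; at 0.0 it always consumes its own predictions, which is how early mistakes compound into nonsense.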

Possible Extensions

I do think it would be fun to generate jokes and punchlines using an RNN or LSTM before feeding them into these models, so that there is less human intervention (i.e. manually writing fake jokes/punchlines).

I also think the model would be much more fun to play with if I could train it on a much larger dataset, e.g. 10K+ jokes.

Implementing a Seq2Seq Neural Network with Attention for Machine Translation from Scratch using PyTorch

Continuing with PyTorch implementation projects, last week I used this PyTorch tutorial to implement the sequence-to-sequence (seq2seq) model, an encoder-decoder network with an attention mechanism, applied to a French-to-English translation task (and vice versa). The script, pre-trained model, and training data can be found on my GitHub repo.

In the following example, the first line (>) is the French input, the second line (=) is the English ground truth, and the third line (<) is the English translation output by the model.

> je n appartiens pas a ce monde .
= i m not from this world .
< i m not from this world .

Model Overview

In this particular PyTorch implementation, the network comprises three main components:

  • an encoder, which encodes the input text into a vector representation. For this project, the encoder is a recurrent neural network using gated recurrent units (GRUs). For each input word, the encoder outputs a vector and a hidden state, and the hidden state is carried over to the next input word.

  • attention, a set of weights that is used during decoding. Attention weights are calculated using a simple feed-forward layer with softmax.

  • a decoder, which takes the encoder output and the attention weights and generates a prediction for the next word. In this project, the decoder is a recurrent neural network using GRUs. It starts with the encoder's last hidden state, which can be interpreted as a context vector for the input, and a start-of-sentence token. For each subsequent word, the decoder uses the attention weights and its current hidden state to make a prediction with softmax.
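
The attention step itself is small: one score per encoder position, pushed through a softmax so the weights sum to one, then a weighted average of the encoder outputs. A minimal numeric sketch in plain Python rather than PyTorch tensors (the function names are mine, for illustration):

```python
import math

def softmax(scores):
    """Normalize scores into weights that sum to one."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def attend(scores, encoder_outputs):
    """Weighted average of encoder output vectors under softmaxed scores."""
    weights = softmax(scores)
    dim = len(encoder_outputs[0])
    return [sum(w * vec[i] for w, vec in zip(weights, encoder_outputs))
            for i in range(dim)]
```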

The training data comes from the Tatoeba Project and consists of language pairs in a text file. While this model uses the French-to-English data file, the file can easily be replaced with any other language pair file from the collection. As a caveat, I have not tested this on languages that may use different encodings (e.g. Traditional Chinese, Arabic, etc.).
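
The Tatoeba-derived files used by the tutorial store one tab-separated pair per line, English first, so swapping languages really is just pointing at a different file. A sketch of loading and normalizing such a file, mirroring the tutorial's lowercasing, accent stripping, and punctuation padding (which is why the French examples here read "a ce monde" rather than "à ce monde"; `load_pairs` and `normalize` are my illustrative names):

```python
import re
import unicodedata

def normalize(s):
    """Lowercase, strip accents, and isolate basic punctuation."""
    s = unicodedata.normalize("NFD", s.lower().strip())
    s = "".join(ch for ch in s if unicodedata.category(ch) != "Mn")  # drop accents
    s = re.sub(r"([.!?])", r" \1", s)     # pad . ! ? with a leading space
    s = re.sub(r"[^a-zA-Z.!?]+", " ", s)  # everything else becomes a space
    return s.strip()

def load_pairs(lines):
    """Parse 'english<TAB>other-language' lines into normalized (src, en) pairs."""
    pairs = []
    for line in lines:
        en, other = line.rstrip("\n").split("\t")[:2]
        pairs.append((normalize(other), normalize(en)))
    return pairs
```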

Results

In the following examples, the first line (>) is the French input, the second line (=) is the English ground truth, and the third line (<) is the resulting English translation output from the model.

Overall, the results are fairly decent considering the small size of the training set: a 9MB text file of about 1,400 language pairs.

> je suis impatient de la prochaine fois .
= i m looking forward to the next time .
< i m looking forward to the next time .

> je n appartiens pas a ce monde .
= i m not from this world .
< i m not from this world .

> il enseigne depuis ans .
= he s been teaching for years .
< he is been for for years .

> tu es sauve .
= you re safe .
< you re safe .

> je ne suis souvent qu a moitie reveille .
= i m often only half awake .
< i m still only sure .

> nous sommes reconnaissantes .
= we re grateful .
< we re contented .

> j en ai marre de garder des secrets .
= i m tired of keeping secrets .
< i m tired of hearing tom s .

> vous etes a nouveau de retour .
= you re back again .
< you re back again .

> il n est pas marie .
= he s not married .
< he s not married .

> je suis responsable des courses .
= i m in charge of shopping .
< i m very one .

Potential Extensions

Overall, this project serves mainly as a toy example and could easily be extended for better performance.

  • Training the model on other languages would be relatively straightforward as it would mainly be a matter of switching out the text file used for training data.

  • The embeddings can be replaced with other word embedding approaches, e.g. word2vec or GloVe.

  • There are various approaches to calculating attention weights. This implementation uses a feed-forward layer with softmax; experimenting with alternatives such as cosine similarity (the angle between two vectors) or the dot product (which considers both the angle and the magnitude of the two vectors) could potentially produce different results.

  • Experimenting with longer training times, bigger datasets, and parameter tuning would likely yield better results.
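
For reference, the two alternative scoring functions mentioned above differ only in whether vector magnitudes are normalized away. A plain-Python sketch (generic definitions, not functions from this implementation):

```python
import math

def dot(u, v):
    """Dot product: sensitive to both the angle and the magnitudes."""
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    """Cosine similarity: angle only, with magnitudes divided out."""
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))
```

Scaling one input vector changes the dot-product score but leaves the cosine score untouched, which is the practical difference between the two.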

Personally, I’m interested in running this network on translating jokes2punchlines and punchlines2jokes. My next steps are to acquire or compile a dataset of jokes with a question-answer format to train a seq2seq model. :-)