NLP - Week 1 - Project.ipynb - Colaboratory

DUPLICATE THIS COLAB TO START WORKING ON IT, using File > Save a copy to Drive.

Week 1: Text Classification

What are we building

We'll continue to apply our learning philosophy of repetition as we build multiple classification models of increasing complexity, in the following order:

1. Average of Word2Vec + MLP layer.
2. Can we concatenate 3 token embeddings and then average them? Does this do better than the previous method?
3. Build an embedding-layer-based model.
4. Extension: explore different parameters, features and architectures.

Evaluation

We'll be evaluating our models on the following metric:

1. Accuracy: the ratio of the number of correctly classified instances to the total number of instances.
2. Extension: this is a multi-class classification problem, so visualize an N*N confusion matrix of actual class vs. predicted class (N = number of classes).

Instructions

1. We've provided scaffolding for all the boilerplate PyTorch code to get to our first model. This covers downloading and parsing the dataset, and the training code for the baseline model. Make sure to read all the steps and internalize what is happening.
2. At this point our model gets to an accuracy of about 0.32. After this we'll try to improve the model by using sliding windows of text instead of just one word at a time. Does this improve accuracy?
3. The third model we're going to build is an embedding-layer-based model. Here, instead of using pre-trained word embeddings, we'll be creating new vectors as part of the training process. How do you think this model will perform?
4. Extension: we've suggested a bunch of extensions to the project, so go crazy, tweak any parts of the pipeline and see if you can beat all the current models.

Code Overview

* Dependencies: Python dependencies and loading the spaCy model
* Project Dataset: download the conversation dataset and parse it into a PyTorch Dataset
* Trainer: trainer function to help with multi-epoch training
* Model 1: simple Word2Vec + MLP model
* Model 2: sliding window trigram (Word2Vec)
* Model 3: embedding bag based model on trigrams
* Extensions

Dependencies

Now let's get started, and to kick things off, as always, we install some dependencies.

    %%capture
    # Install all the required dependencies for the project
    !pip install pytorch-lightning==1.5.10 spacy==2.2.4
    !python -m spacy download en_core_web_md

Import all the necessary libraries we need throughout the project.

    from sklearn.preprocessing import LabelEncoder
    from torch import nn
    from torch.nn.utils.rnn import pad_sequence
    from torch.utils.data import DataLoader, Dataset, random_split
    from collections import Counter
    import en_core_web_md
    import numpy as np
    import pytorch_lightning as pl
    import spacy
    import torch
    import torch.nn.functional as F
    import torchmetrics

First things first, let's load the spaCy model, which comes with pre-trained embeddings.
This process is expensive, so only do it once.

    # Really expensive operation to load the entire spaCy word-vector index in memory
    # We'll only run it once
    loaded_spacy_model = en_core_web_md.load()

Fix the random seed for numpy and pytorch so the entire class gets consistent results.

    # Fix the random seed so that we get consistent results
    torch.manual_seed(0)
    np.random.seed(0)

Classifier Project > Let's Begin

Data Loading and Processing (Common to ALL Solutions)

Dataset

We'll be using the Empathetic Dialogues dataset open-sourced by Facebook (link). It can be downloaded as a tarball from the following link.

A sample row from the dataset:

    conv_id,utterance_idx,context,prompt,speaker_idx,utterance,selfeval,tags
    hit:12388_conv:24777,1,joyful,felt overcome with emotions when Christmas came around as a kid,437,...

The three columns we'll primarily focus on are:

1. context ==> the emotion we're trying to predict
2. prompt + utterance ==> we'll combine these sentences and use them as input

But let's download and explore the dataset, and these should automatically become clear.

    from google.colab import drive
    drive.mount("/content/drive/")

    Mounted at /content/drive/

    import os
    os.listdir("/content/drive/MyDrive/Corise_NLP/Week1")

    ['empatheticdialogues.tar.gz.2',
     'empatheticdialogues.tar.gz',
     'empatheticdialogues.tar.gz.1',
     'NLP- Week 1 - Word2Vec - Tutorial.ipynb',
     'classification',
     'lightning_logs',
     'NLP- Week 1 - Project.ipynb']

    os.chdir("/content/drive/MyDrive/Corise_NLP/Week1")
    os.getcwd()

    '/content/drive/MyDrive/Corise_NLP/Week1'

    os.listdir()

    ['empatheticdialogues.tar.gz.2', 'empatheticdialogues.tar.gz', 'empatheticdialogues.tar.gz.1',
     'NLP- Week 1 - Word2Vec - Tutorial.ipynb', 'classification', 'lightning_logs',
     'NLP- Week 1 - Project.ipynb']

    import tarfile
    import os
    import csv

    DIRECTORY_NAME = "classification"
    TRAIN_FILE = "classification/empatheticdialogues/train.csv"
    VALIDATION_FILE = "classification/empatheticdialogues/valid.csv"
    TEST_FILE = "classification/empatheticdialogues/test.csv"

    def download_dataset():
        """
        Download the dialog dataset. The tarball contains three files: train.csv, valid.csv, test.csv.
        """
        !wget 'https://dl.fbaipublicfiles.com/parlai/empatheticdialogues/empatheticdialogues.tar.gz'
        if not os.path.isdir(DIRECTORY_NAME):
            !mkdir classification
        tar = tarfile.open('empatheticdialogues.tar.gz')
        tar.extractall(DIRECTORY_NAME)
        tar.close()

    # Expensive operation so we should just do this once
    download_dataset()
    --2022-07-16 00:31:50--  https://dl.fbaipublicfiles.com/parlai/empatheticdialogues/empatheticdialogues.tar.gz
    Resolving dl.fbaipublicfiles.com (dl.fbaipublicfiles.com)... 172.67.9.4, 104.22.75.142, ...
    HTTP request sent, awaiting response... 200 OK
    Length: 28022709 (27M) [application/gzip]
    Saving to: 'empatheticdialogues.tar.gz.3'
    2022-07-16 00:31:52 (20.6 MB/s) - 'empatheticdialogues.tar.gz.3' saved [28022709/28022709]

Now the question is, did it do the right thing? Time to find out.

    import glob
    glob.glob(f"{DIRECTORY_NAME}/**/*.csv", recursive=True)

    ['classification/empatheticdialogues/test.csv',
     'classification/empatheticdialogues/valid.csv',
     'classification/empatheticdialogues/train.csv']

Cool, we see all our files. Let's poke at one of them before we start parsing our dataset.

    with open(TRAIN_FILE, 'r', newline='\n') as file:
        reader = csv.reader(file, delimiter=',')
        i = 0
        while i < 5:
            print(next(reader))
            i += 1

    ['conv_id', 'utterance_idx', 'context', 'prompt', 'speaker_idx', 'utterance', 'selfeval', 'tags']
    ['hit:0_conv:1', '1', 'sentimental', 'I remember going to the fireworks with my best fr...
    ['hit:0_conv:1', '2', 'sentimental', 'I remember going to the fireworks with my best fr...
    ['hit:0_conv:1', '3', 'sentimental', 'I remember going to the fireworks with my best fr...
    ['hit:0_conv:1', '4', 'sentimental', 'I remember going to the fireworks with my best fr...

The set of columns we care about are:

1. context ==> the emotion we're trying to predict
2. prompt + utterance ==> we'll combine these sentences and use them as input

Parse the dataset file and create a label encoder that converts text labels to integer ids and vice versa.

    def parse_dataset(file_path, label_encoder):
        """
        Function to parse the csv into a training or test dataset.

        Input: Tuple[conv_id, utterance_idx, context, prompt, speaker_idx, utterance, selfeval, tags]
        Output: Tuple[label, merged sentences in the conversation]
        """
        data = []
        with open(file_path, 'r', newline='\n') as file:
            reader = csv.reader(file, delimiter=',')
            # This skips the first row of the CSV file.
            next(reader)
            for row in reader:
                # This is a bad row if it is missing any of the entries
                if len(row) < 8:
                    continue
                # Append the entry into the list of data points.
                data.append((label_encoder([row[2]])[0], row[3] + " " + row[5]))  # prompt + [one space] + utterance
        return data

    # A label encoder converts the text labels into integer ids
    def get_label_encoder():
        """
        Get all the labels in a dataset and return two maps that convert labels -> id or vice versa.
        """
        # We pass an identity encoder since we still need the raw labels to train the label encoder
        raw_data = parse_dataset(TRAIN_FILE, lambda x: x)
        # print("raw_data[0]", raw_data[0])
        le = LabelEncoder()  # from sklearn.preprocessing
        le.fit([x[0] for x in raw_data])
        return le

    # Global variables used throughout the notebook
    label_encoder = get_label_encoder()
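As a quick aside on what the fitted encoder gives us: sklearn's LabelEncoder maps each emotion string to an integer id and back via transform and inverse_transform. A small standalone sketch with toy labels (not the project data):

    from sklearn.preprocessing import LabelEncoder

    le = LabelEncoder()
    le.fit(["joyful", "sad", "angry", "joyful"])    # learn the label vocabulary
    ids = le.transform(["sad", "joyful"])           # text -> integer ids, e.g. array([2, 1])
    labels = le.inverse_transform(ids)              # integer ids -> text, e.g. ['sad', 'joyful']
    print(ids, labels)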
    dir(label_encoder)

    [... '_validate_data', 'classes_', 'fit', 'fit_transform', 'get_params',
     'inverse_transform', 'set_params', 'transform']

    label_encoder.classes_

    array(['afraid', 'angry', 'annoyed', 'anticipating', 'anxious', 'apprehensive',
           'ashamed', 'caring', 'confident', 'content', 'devastated', 'disappointed',
           'disgusted', 'embarrassed', 'excited', 'faithful', 'furious', 'grateful',
           'guilty', 'hopeful', 'impressed', 'jealous', 'joyful', 'lonely', 'nostalgic',
           'prepared', 'proud', 'sad', 'sentimental', 'surprised', 'terrified',
           'trusting'], dtype=...)
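Model 1, the baseline described in the instructions, represents each sentence as the average of the pre-trained spaCy word vectors of its tokens and feeds that 300-dimensional vector to an MLP. The snippet below is only a rough sketch of that averaging step, assuming en_core_web_md's 300-dimensional vectors; the function name is illustrative and not from the notebook:

    import numpy as np

    def average_word_vectors(sentence: str, nlp=loaded_spacy_model, dim: int = 300):
        # Tokenize with spaCy; every token carries a pre-trained 300-d vector.
        doc = nlp(sentence)
        vectors = [token.vector for token in doc]
        if not vectors:
            # Empty input: fall back to an all-zero sentence vector.
            return np.zeros((dim,))
        # Average the token vectors into one fixed-size sentence vector for the MLP.
        return np.mean(np.array(vectors), axis=0)

    sentence_vec = average_word_vectors("I felt overcome with emotions when Christmas came around")
    print(sentence_vec.shape)  # expected: (300,)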
Model 2: Sliding window trigram (Word2Vec)

The core of SpacyChunkVectorizer.vectorize: the spaCy-tokenized sentence (spacy_doc) is chunked into overlapping trigrams of tokens, each trigram's three 300-d token vectors are concatenated into one 900-d vector, and those trigram vectors are averaged into a single sentence vector.

    # print(type(spacy_doc))
    # print([token.text for token in spacy_doc])
    # >>> Today_comma_as i was leaving for work in the morning_comma_ i h...
    # print("self.params.n_grams", self.params.n_grams)  # >>> 3
    n_grams = self.n_gram([token for token in spacy_doc], self.params.n_grams)  # (2)
    # print("n_grams", n_grams)
    # >>> [[Today_comma_as, i, was], [i, was, leaving], [was, lea...
    n_gram_vector = [np.concatenate(np.array([token.vector for token in n_gram]), axis=0)
                     for n_gram in n_grams]
    # print(len(n_gram_vector[0]))   # 900
    # print("len(n_gram_vector)", len(n_gram_vector))   # e.g. 56
    if n_gram_vector == []:
        sentence_vector = np.zeros((self.params.n_grams * self.params.word_vec_dimension,))
    else:
        sentence_vector = np.mean(np.array(n_gram_vector), axis=0)  # (4)
    sentence_tokens = list([token.text for token in spacy_doc])  # (5)
    # print("sentence_tokens:", len(sentence_tokens), "sentence_vector:", len(sentence_vector))
    return sentence_vector, sentence_tokens

    output2 = trainer(  # Observe the change in input parameters
        model=WordVectorClassificationModel(
            HParamsSpacy.word_vec_dimension * HParamsSpacy.n_grams,
            HParamsSpacy.num_classes),
        params=HParamsSpacy,
        vectorizer=SpacyChunkVectorizer(HParamsSpacy))

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs

      | Name     | Type                          | Params
    ------------------------------------------------------
    0 | model    | WordVectorClassificationModel | 279 K
    1 | accuracy | Accuracy                      | 0

    # A better way would be creating a function
    from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
    import matplotlib.pyplot as plt

    y_test = []
    y_pred = []
    for batch_results in output2:
        y_test.extend(list(batch_results["labels"].numpy()))
        y_pred.extend(list(batch_results["predictions"].numpy()))

    cm = confusion_matrix(y_test, y_pred)
    cmp = ConfusionMatrixDisplay(cm, display_labels=label_encoder.inverse_transform(list(set(y_test))))
    fig, ax = plt.subplots(figsize=(15, 15))
    cmp.plot(ax=ax)
    plt.xticks(rotation=90)

    [Confusion matrix plot: actual vs. predicted emotion classes for the trigram Word2Vec model]
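The n_gram helper called in vectorize above is not shown in this excerpt. A minimal sliding-window version consistent with the debug output above (overlapping windows of three tokens) might look like this; the exact signature in the notebook may differ:

    def n_gram(tokens, n):
        """Return overlapping windows of n consecutive tokens: [t0, t1, t2], [t1, t2, t3], ..."""
        # A sketch: only windows containing a full n tokens are kept.
        return [tokens[i:i + n] for i in range(len(tokens) - n + 1)]

    # e.g. n_gram(["Today", "i", "was", "leaving"], 3)
    # -> [["Today", "i", "was"], ["i", "was", "leaving"]]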
Assignment Part 2 - Model 3: EmbeddingBag -- TO BE COMPLETED

Expected accuracy: ~43%

The third model we're going to build is an embedding-layer-based model. Here, instead of using pre-trained word embeddings, we'll be creating new vectors as part of the training process. How do you think this model will perform?

The implementation has the following steps:

1. get_char_trigram_token_map: implement a map that assigns the top num_tokens tokens in the corpus to ids between 1 and num_tokens. Here are some steps that should help with the implementation:
   1. Compute a frequency map of the num_tokens most common character trigrams in the training data. Note: we're now moving away from words and using characters directly.
   2. Add an extra key called "UNK" to capture unknown tokens.
   3. Create unique integer ids for all these tokens, 1...N.
2. Vectorizer: implement a new vectorizer that, for each sentence, does the following:
   1. Get all trigrams for the sentence.
   2. Get the id for every trigram.
   3. For trigrams missing from the map, mark them as "UNK".
   4. Append all the ids into a list; that is your sentence vector.
3. Forward pass of the model: implement the forward pass of the model:
   1. Pass the input batch through the embedding layer.
   2. Pass the output of the embedding layer into our linear layer.

Some rationale for trying this out:

* We average the embeddings in a sentence anyway, so word or not-word maybe doesn't matter.
* The vocabulary of character trigrams is much smaller than that of word trigrams, so our models are easier to train.
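To make the switch from word n-grams to character n-grams concrete before the implementation below, here is a toy example (not from the notebook) of the character trigrams of a short string:

    text = "hello world"
    # Slide a window of 3 characters across the string, lower-cased.
    char_trigrams = [text[i:i + 3].lower() for i in range(len(text) - 2)]
    print(char_trigrams)
    # ['hel', 'ell', 'llo', 'lo ', 'o w', ' wo', 'wor', 'orl', 'rld']

Note that the tokenizer implemented below iterates over every starting index, so it also keeps the shorter trailing fragments ('ld' and 'd' here).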
    class HParamsCTT:
        batch_size: int = 16              # NOTE THE CHANGE
        integer_input: bool = True        # NOTE THE CHANGE
        num_classes: int = len(label_encoder.classes_)
        learning_rate: float = 0.001
        max_epochs: int = 4
        n_grams: int = 3
        embed_dim: int = 350              # ADDED, change it to your liking
        num_tokens: int = 5000


    class CharacterTrigramTokenizer:
        """
        We represent a sentence as a vector of num_tokens tokens.
        If the trigram is present in the sentence then we add the token's id to the sentence vector.
        """

        def __init__(self, train_data, num_tokens):
            self.num_tokens = num_tokens
            self.token_to_id_map = self.get_char_trigram_token_map(train_data, num_tokens)

        def ngrams(self, text, n):
            n_grams = []
            for i in range(len(text)):
                n_grams.append(text[i: i + n].lower())  # we only work with lower-case
            return n_grams

        def get_frequency_map(self, input_data, num_tokens):
            from collections import Counter
            n_grams = []
            for i in range(len(input_data)):
                sentence = input_data[i][1]
                sentence_n_grams = self.ngrams(sentence, 3)
                n_grams.extend(sentence_n_grams)
            return Counter(n_grams).most_common(num_tokens)
            # return sorted(set(n_grams), key=n_grams.count, reverse=True)[:num_tokens]

        def get_char_trigram_token_map(self, train_data, num_tokens):
            """
            1. Compute a frequency map of the `num_tokens` most common trigrams in the training data.
            2. Add an extra key called "UNK" to capture unknown tokens.
            3. Create unique integer ids for all these tokens 1...N.
            """
            freq_map = self.get_frequency_map(train_data, num_tokens)  # (1)
            token_to_id_map = {}
            for i in range(len(freq_map)):
                token_to_id_map[freq_map[i][0]] = i + 1
            token_to_id_map["UNK"] = len(freq_map) + 1
            self.token_to_id_map = token_to_id_map
            return token_to_id_map

        def vectorize(self, sentence):
            """
            Given a sentence (string), do the following:
            1. Get all trigrams for the sentence.
            2. Get the id for every trigram.
            3. For missing trigrams in the map, mark them as "UNK".
            4. Append all the ids into a list; that is your sentence vector.
            """
            sentence_tokens = self.ngrams(sentence, 3)  # trigram is hard-coded  # (1)
            # print(sentence_tokens)
            sentence_vector = []
            for i in range(len(sentence_tokens)):  # (2)
                if sentence_tokens[i] in self.token_to_id_map.keys():
                    sentence_vector.append(self.token_to_id_map[sentence_tokens[i]])
                else:  # (3)
                    sentence_tokens[i] = "UNK"
                    sentence_vector.append(self.token_to_id_map["UNK"])
            return sentence_vector, sentence_tokens  # (4)

    training_data[0][1]

    'I remember going to the fireworks with my best friend. There was a lot of people_comma_
    but it only felt like us in the world. I remember going to see the fireworks with my best
    friend. It was the first time we ever spent time alone together. Although there wa...

Let's just validate the output of the tokenizer before we train the model. We should see something like:

    Number of trigrams: 12997
    [a few trigrams with their ids]

    # Running this cell is mandatory
    characterTokenizer = CharacterTrigramTokenizer(training_data, HParamsCTT.num_tokens)
    i = 0
    for k, v in characterTokenizer.token_to_id_map.items():
        if i > 10:
            break
        print(k, v)
        i += 1

    [output: the first ten trigrams in the map with their ids, e.g. ' th', 'the', ' to', 'ing', 'ng ', ...]

Now we can create the simple embedding-layer-based model and start training it.
    class EmbeddingBagClassificationModel(torch.nn.Module):

        def __init__(self, num_tokens, embed_dim, n_classes):
            super().__init__()
            self.classes = n_classes
            self.embedding = torch.nn.EmbeddingBag(num_tokens, embed_dim, padding_idx=0)
            self.hidden_layer = torch.nn.Linear(embed_dim, 300)
            self.linear_layer = torch.nn.Linear(300, n_classes)

        def forward(self, batch):
            """
            Pass the input batch through the embedding layer and then follow it up with the linear layers.
            """
            # print(batch)
            y = self.embedding(batch)
            y = self.hidden_layer(y)
            y = self.linear_layer(y)
            return y


    output3 = trainer(  # Note the plus three: padding + UNK + 1 buffer
        model=EmbeddingBagClassificationModel(HParamsCTT.num_tokens + 3,
                                              HParamsCTT.embed_dim,
                                              HParamsCTT.num_classes),
        params=HParamsCTT,
        vectorizer=characterTokenizer)

    GPU available: False, used: False
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs

      | Name     | Type                             | Params
    ---------------------------------------------------------
    0 | model    | EmbeddingBagClassificationModel  | 1.9 M
    1 | accuracy | Accuracy                         | 0

    1.9 M     Trainable params
    0         Non-trainable params
    1.9 M     Total params
    7.464     Total estimated model params size (MB)

    Epoch 3: 100% 5187/5187 [02:39<00:00, 32.54it/s, loss=1.48, v_num=67,
    train_loss=0.224, val_loss=2.140, val_accuracy=0.426]

    # A better way would be creating a function
    y_test = []
    y_pred = []
    for batch_results in output3:
        y_test.extend(list(batch_results["labels"].numpy()))
        y_pred.extend(list(batch_results["predictions"].numpy()))

    cm = confusion_matrix(y_test, y_pred)
    cmp = ConfusionMatrixDisplay(cm, display_labels=label_encoder.inverse_transform(list(set(y_test))))
    fig, ax = plt.subplots(figsize=(15, 15))
    cmp.plot(ax=ax)
    plt.xticks(rotation=90)

    [Confusion matrix plot: actual vs. predicted emotion classes for the EmbeddingBag model]
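Because HParamsCTT.integer_input is True, the batches reaching this model are tensors of trigram ids rather than float vectors, presumably padded to equal length with id 0 (note pad_sequence in the imports). As a standalone illustration with made-up ids (a toy sketch, not taken from the trainer), here is how nn.EmbeddingBag with padding_idx=0 mean-pools such a padded batch:

    import torch
    from torch import nn
    from torch.nn.utils.rnn import pad_sequence

    # Two toy "sentences" as lists of trigram ids (0 is reserved for padding).
    ids_a = torch.tensor([5, 12, 7, 3])
    ids_b = torch.tensor([2, 9])

    # Pad to a (batch, max_len) tensor; shorter rows are filled with 0.
    batch = pad_sequence([ids_a, ids_b], batch_first=True, padding_value=0)
    print(batch)
    # tensor([[ 5, 12,  7,  3],
    #         [ 2,  9,  0,  0]])

    # EmbeddingBag with mode='mean' (the default) averages the embeddings in each row;
    # padding_idx=0 keeps the pad id out of the average (requires PyTorch >= 1.8).
    bag = nn.EmbeddingBag(num_embeddings=20, embedding_dim=6, padding_idx=0)
    pooled = bag(batch)
    print(pooled.shape)  # torch.Size([2, 6]) -- one pooled vector per sentence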