Python For Data Science Cheat Sheet: Keras
Learn Python for data science interactively at [Link]

Keras is a powerful and easy-to-use deep learning library for Theano and
TensorFlow that provides a high-level neural networks API to develop and
evaluate deep learning models.
A Basic Example
>>> import numpy as np
>>> from keras.models import Sequential
>>> from keras.layers import Dense
>>> data = np.random.random((1000,100))
>>> labels = np.random.randint(2,size=(1000,1))
>>> model = Sequential()
>>> model.add(Dense(32,
                    activation='relu',
                    input_dim=100))
>>> model.add(Dense(1,activation='sigmoid'))
>>> model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
>>> model.fit(data,labels,epochs=10,batch_size=32)
>>> predictions = model.predict(data)
Data (also see NumPy, Pandas & Scikit-Learn)

Your data needs to be stored as NumPy arrays or as a list of NumPy arrays.
Ideally, you split the data into training and test sets, for which you can
also resort to the train_test_split function of sklearn.model_selection.
Keras Data Sets
>>> from keras.datasets import boston_housing, mnist, cifar10, imdb
>>> (x_train,y_train),(x_test,y_test) = mnist.load_data()
>>> (x_train2,y_train2),(x_test2,y_test2) = boston_housing.load_data()
>>> (x_train3,y_train3),(x_test3,y_test3) = cifar10.load_data()
>>> (x_train4,y_train4),(x_test4,y_test4) = imdb.load_data(num_words=20000)
>>> num_classes = 10
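The Keras data sets come back as raw NumPy arrays; the MNIST images, for
instance, load as 28x28 uint8 arrays and need flattening and rescaling before
they fit the 784-input MLP defined under Model Architecture below. A minimal
sketch (this step is not part of the original sheet):
>>> # Flatten 28x28 images to 784-vectors, rescale pixel values to [0,1]
>>> x_train = x_train.reshape(60000, 784).astype('float32') / 255
>>> x_test = x_test.reshape(10000, 784).astype('float32') / 255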
Other
>>> from urllib.request import urlopen
>>> data = np.loadtxt(urlopen("http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"),
                      delimiter=",")
>>> X = data[:,0:8]
>>> y = data[:,8]
Preprocessing (also see NumPy & Scikit-Learn)

Sequence Padding
>>> from keras.preprocessing import sequence
>>> x_train4 = sequence.pad_sequences(x_train4,maxlen=80)
>>> x_test4 = sequence.pad_sequences(x_test4,maxlen=80)
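pad_sequences left-pads (and left-truncates) by default, so shorter sequences
are zero-filled at the front. A toy illustration, assuming the import above:
>>> sequence.pad_sequences([[1,2,3],[4,5]], maxlen=4)
array([[0, 1, 2, 3],
       [0, 0, 4, 5]], dtype=int32)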
One-Hot Encoding
>>> from keras.utils import to_categorical
>>> Y_train = to_categorical(y_train, num_classes)
>>> Y_test = to_categorical(y_test, num_classes)
>>> Y_train3 = to_categorical(y_train3, num_classes)
>>> Y_test3 = to_categorical(y_test3, num_classes)
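to_categorical turns integer labels into one-hot rows, which is what the
categorical_crossentropy loss used below expects. A toy illustration:
>>> to_categorical([0, 2, 1], num_classes=3)
array([[1., 0., 0.],
       [0., 0., 1.],
       [0., 1., 0.]], dtype=float32)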
Train and Test Sets
>>> from sklearn.model_selection import train_test_split
>>> X_train5,X_test5,y_train5,y_test5 = train_test_split(X,
                                                         y,
                                                         test_size=0.33,
                                                         random_state=42)

Standardization/Normalization
>>> from sklearn.preprocessing import StandardScaler
>>> scaler = StandardScaler().fit(x_train2)
>>> standardized_X = scaler.transform(x_train2)
>>> standardized_X_test = scaler.transform(x_test2)
Model Architecture

Sequential Model
>>> from keras.models import Sequential
>>> model = Sequential()
>>> model2 = Sequential()
>>> model3 = Sequential()

Multilayer Perceptron (MLP)

Binary Classification
>>> from keras.layers import Dense
>>> model.add(Dense(12,
                    input_dim=8,
                    kernel_initializer='uniform',
                    activation='relu'))
>>> model.add(Dense(8,kernel_initializer='uniform',activation='relu'))
>>> model.add(Dense(1,kernel_initializer='uniform',activation='sigmoid'))

Multi-Class Classification
>>> from keras.layers import Dropout
>>> model.add(Dense(512,activation='relu',input_shape=(784,)))
>>> model.add(Dropout(0.2))
>>> model.add(Dense(512,activation='relu'))
>>> model.add(Dropout(0.2))
>>> model.add(Dense(10,activation='softmax'))

Regression
>>> model.add(Dense(64,activation='relu',input_dim=train_data.shape[1]))
>>> model.add(Dense(1))
Convolutional Neural Network (CNN)
>>> from keras.layers import Activation,Conv2D,MaxPooling2D,Flatten
>>> model2.add(Conv2D(32,(3,3),padding='same',input_shape=x_train.shape[1:]))
>>> model2.add(Activation('relu'))
>>> model2.add(Conv2D(32,(3,3)))
>>> model2.add(Activation('relu'))
>>> model2.add(MaxPooling2D(pool_size=(2,2)))
>>> model2.add(Dropout(0.25))
>>> model2.add(Conv2D(64,(3,3),padding='same'))
>>> model2.add(Activation('relu'))
>>> model2.add(Conv2D(64,(3,3)))
>>> model2.add(Activation('relu'))
>>> model2.add(MaxPooling2D(pool_size=(2,2)))
>>> model2.add(Dropout(0.25))
>>> model2.add(Flatten())
>>> model2.add(Dense(512))
>>> model2.add(Activation('relu'))
>>> model2.add(Dropout(0.5))
>>> model2.add(Dense(num_classes))
>>> model2.add(Activation('softmax'))
Recurrent Neural Network (RNN)
>>> from keras.layers import Embedding,LSTM
>>> model3.add(Embedding(20000,128))
>>> model3.add(LSTM(128,dropout=0.2,recurrent_dropout=0.2))
>>> model3.add(Dense(1,activation='sigmoid'))
Inspect Model
>>> model.output_shape     # Model output shape
>>> model.summary()        # Model summary representation
>>> model.get_config()     # Model configuration
>>> model.get_weights()    # List all weight tensors in the model
Compile Model

MLP: Binary Classification
>>> model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

MLP: Multi-Class Classification
>>> model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

MLP: Regression
>>> model.compile(optimizer='rmsprop',
                  loss='mse',
                  metrics=['mae'])

Recurrent Neural Network
>>> model3.compile(loss='binary_crossentropy',
                   optimizer='adam',
                   metrics=['accuracy'])
Model Training
>>> model3.fit(x_train4,
               y_train4,
               batch_size=32,
               epochs=15,
               verbose=1,
               validation_data=(x_test4,y_test4))
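fit() also returns a History object whose .history dict records the per-epoch
loss and metric values. A minimal sketch; note the metric key is 'acc' in
older Keras releases and 'accuracy' in newer ones:
>>> history = model3.fit(x_train4, y_train4, batch_size=32, epochs=15,
                         validation_data=(x_test4,y_test4))
>>> history.history['val_acc']   # per-epoch validation accuracy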
Evaluate Your Model's Performance
>>> score = model3.evaluate(x_test4,
                            y_test4,
                            batch_size=32)
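evaluate() returns the loss followed by each compiled metric, so here score
unpacks as:
>>> score[0]   # test loss
>>> score[1]   # test accuracy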
Prediction
>>> model3.predict(x_test4, batch_size=32)
>>> model3.predict_classes(x_test4,batch_size=32)
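predict() returns sigmoid probabilities for this binary model, and
predict_classes() was removed in later Keras versions. An equivalent sketch
using a 0.5 threshold (variable names are illustrative):
>>> probs = model3.predict(x_test4, batch_size=32)
>>> class_labels = (probs > 0.5).astype(int)   # predicted 0/1 labels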
Save/Reload Models
>>> from keras.models import load_model
>>> model3.save('model_file.h5')
>>> my_model = load_model('my_model.h5')
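save() stores architecture, weights, and optimizer state together; the pieces
can also be saved separately (file names are illustrative):
>>> json_string = model3.to_json()           # architecture only
>>> model3.save_weights('model_weights.h5')  # weights only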
Model Fine-tuning

Optimization Parameters
>>> from keras.optimizers import RMSprop
>>> opt = RMSprop(lr=0.0001, decay=1e-6)
>>> model2.compile(loss='categorical_crossentropy',
                   optimizer=opt,
                   metrics=['accuracy'])
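The same pattern works for any Keras optimizer, e.g. SGD with momentum (the
hyperparameter values are illustrative, not from the original sheet):
>>> from keras.optimizers import SGD
>>> opt2 = SGD(lr=0.01, momentum=0.9, nesterov=True)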
Early Stopping
>>> from keras.callbacks import EarlyStopping
>>> early_stopping_monitor = EarlyStopping(patience=2)
>>> model3.fit(x_train4,
               y_train4,
               batch_size=32,
               epochs=15,
               validation_data=(x_test4,y_test4),
               callbacks=[early_stopping_monitor])
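Other callbacks combine the same way; for instance, ModelCheckpoint can keep
the best weights seen during training (the file name is illustrative):
>>> from keras.callbacks import ModelCheckpoint
>>> checkpoint = ModelCheckpoint('best_model.h5', save_best_only=True)
>>> model3.fit(x_train4, y_train4, batch_size=32, epochs=15,
               validation_data=(x_test4,y_test4),
               callbacks=[early_stopping_monitor, checkpoint])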
DataCamp
Learn Python for Data Science Interactively