
BTVN6 Code

The document outlines a Python script for building and training a neural network model to predict diabetes using the Pima Indians Diabetes dataset. It includes steps for data loading, preprocessing, model creation, training, evaluation, and plotting learning curves. The model is compiled with binary crossentropy loss and trained for 50 epochs, with test accuracy printed at the end.

#1. Import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay
from tensorflow import keras

#2. Load and prepare the dataset
url = "[Link]"  # dataset URL (the link was not preserved in the source)
columns = ["Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
           "Insulin", "BMI", "DiabetesPedigreeFunction", "Age", "OutCome"]
df = pd.read_csv(url, names=columns)
print(df)

# Separate features and labels


X = df.drop("OutCome", axis=1).values
y = df["OutCome"].values

#3. Train/valid/test split (64% train, 16% valid, 20% test)
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_temp, y_temp, test_size=0.2, random_state=42)
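The two-stage split yields the stated 64/16/20 proportions because the second split takes 20% of the remaining 80% (0.8 × 0.2 = 0.16). A quick self-contained check on synthetic data (not the diabetes set):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)
y = np.zeros(1000)

# First split: 20% held out for test (200 of 1000 rows).
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Second split: 20% of the remaining 800 rows -> 160 valid, 640 train.
X_train, X_valid, y_train, y_valid = train_test_split(X_temp, y_temp, test_size=0.2, random_state=42)

print(len(X_train), len(X_valid), len(X_test))  # 640 160 200
```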

#4. Standardize the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)
X_test = scaler.transform(X_test)
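The scaler is fit only on the training set, then reused on validation and test data; this keeps test statistics from leaking into preprocessing. A minimal illustration with toy numbers (not from the dataset):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# fit_transform learns mean/std from the training data only;
# transform reuses those statistics on held-out data.
train = np.array([[0.0], [10.0]])  # mean 5, std 5
test = np.array([[5.0]])

scaler = StandardScaler()
print(scaler.fit_transform(train).ravel())  # [-1.  1.]
print(scaler.transform(test).ravel())       # [0.]
```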

#5. Build the neural network model
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=[X_train.shape[1]]),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid")
])

model.summary()
#6. Compile the model
model.compile(loss="binary_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])

#7. Train the model
history = model.fit(X_train, y_train, epochs=50, validation_data=(X_valid, y_valid))

#8. Evaluate the model on the test set
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {accuracy:.4f}")

#9. Plot accuracy and loss over epochs
def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(10, 5))
    plt.grid(True)
    plt.gca().set_ylim(0, 1)
    plt.xlabel("Epochs")
    plt.title("Accuracy & Loss Over Epochs")
    plt.show()

plot_learning_curves(history)
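The script imports classification_report, confusion_matrix, and ConfusionMatrixDisplay but never calls them. A hedged sketch of how they could extend the evaluation, using small hypothetical label arrays in place of y_test and the thresholded model.predict(X_test) output:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Hypothetical ground truth and 0/1 predictions standing in for
# y_test and (model.predict(X_test) > 0.5).astype(int) from the script.
y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 0, 1])

cm = confusion_matrix(y_true, y_pred)
print(cm)                                    # rows: true class, cols: predicted class
print(classification_report(y_true, y_pred)) # per-class precision/recall/F1
# ConfusionMatrixDisplay(confusion_matrix=cm).plot() would render cm as a heatmap.
```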
