Exp 2: Crop Recommendation with TensorFlow and scikit-learn

This document outlines a machine learning workflow for crop recommendation using TensorFlow and scikit-learn: data loading, preprocessing, model definition, training, and evaluation, reaching a test accuracy of approximately 97%. After training, the model predicts a recommended crop from the input features.
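Before the pipeline below, a quick look at the loaded CSV helps confirm the schema. A minimal sketch: only the 'label' column is confirmed by the code in this document; the remaining feature columns (N, P, K, temperature, humidity, ph, rainfall) are an assumption based on the public Kaggle crop-recommendation dataset.

import pandas as pd

# Peek at the dataset before preprocessing; the feature columns mentioned
# above are assumed, not confirmed by this document.
df = pd.read_csv("Crop_recommendation.csv")
print(df.shape)               # (n_rows, n_columns)
print(df.columns.tolist())    # feature columns plus the 'label' target
print(df['label'].nunique())  # number of distinct crop classes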

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split

# Load dataset
df = pd.read_csv("Crop_recommendation.csv")

# Encode categorical labels
label_encoder = LabelEncoder()
df['label'] = label_encoder.fit_transform(df['label'])

# Split features and target
X = df.drop(columns=['label'])
y = df['label']

# Normalize features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split dataset into training and testing sets
# (the seed value is cut off in the source; 42 is a placeholder)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)

# Define model
model = Sequential([
    Input(shape=(X_train.shape[1],)),
    Dense(128, activation='relu'),
    Dropout(0.3),
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(len(df['label'].unique()), activation='softmax')
])

# Compile model
model.compile(optimizer=Adam(learning_rate=0.001),
              loss=SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

# Train model
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20)

Training log (abridged; accuracy climbs from roughly 0.96 to 0.98, validation accuracy stays near 0.96-0.97 across the 20 epochs):

Epoch 1/20
110/110 ———————— 0s 2ms/step - accuracy: 0.9643 - loss: 0.0931 - val_accuracy: 0.9636 - val_loss: 0.0791
...
Epoch 20/20
110/110 ———————— 0s 1ms/step - accuracy: 0.9847 - loss: 0.0478 - val_accuracy: 0.9659 - val_loss: 0.0781
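Since model.fit returns a History object, the per-epoch metrics can be plotted to check for overfitting. A minimal sketch, assuming matplotlib is available (it is not imported in the original notebook):

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy recorded by model.fit
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()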
# Evaluate model
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {accuracy:.2f}")

14/14 ———————— 0s 2ms/step - accuracy: 0.9689 - loss: 0.0712
Test Accuracy: 0.97

# Predict on new sample (example input)
def predict_crop(sample):
    sample_scaled = scaler.transform([sample])
    prediction = model.predict(sample_scaled)
    predicted_class = np.argmax(prediction)
    return label_encoder.inverse_transform([predicted_class])[0]

# Example usage
sample_input = X.iloc[0].values  # Take first row as an example
predicted_crop = predict_crop(sample_input)
print(f"Recommended Crop: {predicted_crop}")

1/1 ———————— 0s 47ms/step
Recommended Crop: rice
C:\Users\student\anaconda3\Lib\site-packages\sklearn\base.py:493: UserWarning: X does not have valid feature names, but StandardScaler was fitted with feature names
  warnings.warn(
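The sklearn UserWarning appears because predict_crop passes a plain Python list to scaler.transform, while the scaler was fitted on a DataFrame with named columns. One way to avoid it (a minimal sketch; behavior is otherwise unchanged) is to wrap the sample in a one-row DataFrame:

def predict_crop(sample):
    # Wrap the sample in a one-row DataFrame so the feature names match
    # those seen by StandardScaler at fit time; this silences the warning.
    sample_df = pd.DataFrame([sample], columns=X.columns)
    sample_scaled = scaler.transform(sample_df)
    predicted_class = np.argmax(model.predict(sample_scaled))
    return label_encoder.inverse_transform([predicted_class])[0]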
