Solution 1:
I'll solve this step by step, computing the perceptron output for each input pair and comparing it
to the expected AND-gate behavior.
Given:
• Weights: w₁ = 0.3, w₂ = 0.3
• Bias: b = -0.4
• Activation function: output = 1 if weighted sum ≥ 0, otherwise 0
Perceptron formula: Weighted sum = w₁x₁ + w₂x₂ + b = 0.3x₁ + 0.3x₂ - 0.4
Let me compute each case:
Case 1: (x₁, x₂) = (0, 0)
• Weighted sum = 0.3(0) + 0.3(0) - 0.4 = -0.4
• Since -0.4 < 0 → Output = 0
• Expected AND output = 0
• ✓ Correct
Case 2: (x₁, x₂) = (0, 1)
• Weighted sum = 0.3(0) + 0.3(1) - 0.4 = 0.3 - 0.4 = -0.1
• Since -0.1 < 0 → Output = 0
• Expected AND output = 0
• ✓ Correct
Case 3: (x₁, x₂) = (1, 0)
• Weighted sum = 0.3(1) + 0.3(0) - 0.4 = 0.3 - 0.4 = -0.1
• Since -0.1 < 0 → Output = 0
• Expected AND output = 0
• ✓ Correct
Case 4: (x₁, x₂) = (1, 1)
• Weighted sum = 0.3(1) + 0.3(1) - 0.4 = 0.3 + 0.3 - 0.4 = 0.2
• Since 0.2 ≥ 0 → Output = 1
• Expected AND output = 1
• ✓ Correct
Conclusion: Yes, the perceptron correctly mimics the AND gate behavior for all input combinations.
The weights w₁ = 0.3, w₂ = 0.3, and bias b = -0.4 are properly configured to implement the logical
AND function.
The key insight is that the bias of -0.4 forces both inputs to be 1 (contributing 0.6 in total) before
the weighted sum reaches the threshold of 0; a single active input contributes only 0.3, which cannot
offset the bias.
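As a sanity check, here is a minimal Python sketch of the same computation (the function name and loop are illustrative, not part of the original problem):

```python
# Verify the perceptron against the full AND truth table.
def perceptron(x1, x2, w1=0.3, w2=0.3, b=-0.4):
    """Step-activation perceptron: fires when the weighted sum is >= 0."""
    weighted_sum = w1 * x1 + w2 * x2 + b
    return 1 if weighted_sum >= 0 else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    expected = x1 & x2  # logical AND
    output = perceptron(x1, x2)
    status = "OK" if output == expected else "MISMATCH"
    print(f"({x1}, {x2}) -> {output}, expected {expected}: {status}")
```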
Solution 2:
I'll perform a forward pass through this neural network step by step.
Given Network Architecture:
• Input layer: 2 neurons
• Hidden layer: 2 neurons (ReLU activation)
• Output layer: 1 neuron (linear activation)
Given Parameters:
• Input-to-hidden weights: w₁,₁=0.1, w₁,₂=0.2, w₂,₁=0.3, w₂,₂=0.4
• Hidden layer biases: b₁=0.1, b₂=0.1
• Hidden-to-output weights: w₃,₁=0.5, w₃,₂=0.6
• Output bias: b₃=0.2
• Input vector: x=[1, 2]
Step 1: Input Layer to Hidden Layer
For hidden neuron 1:
• z₁ = w₁,₁x₁ + w₂,₁x₂ + b₁
• z₁ = 0.1(1) + 0.3(2) + 0.1
• z₁ = 0.1 + 0.6 + 0.1 = 0.8
For hidden neuron 2:
• z₂ = w₁,₂x₁ + w₂,₂x₂ + b₂
• z₂ = 0.2(1) + 0.4(2) + 0.1
• z₂ = 0.2 + 0.8 + 0.1 = 1.1
Step 2: Apply ReLU Activation to Hidden Layer
ReLU(z) = max(0, z)
• h₁ = ReLU(z₁) = ReLU(0.8) = 0.8
• h₂ = ReLU(z₂) = ReLU(1.1) = 1.1
Step 3: Hidden Layer to Output Layer
For output neuron:
• z₃ = w₃,₁h₁ + w₃,₂h₂ + b₃
• z₃ = 0.5(0.8) + 0.6(1.1) + 0.2
• z₃ = 0.4 + 0.66 + 0.2 = 1.26
Step 4: Apply Linear Activation to Output
Since the output uses linear activation:
• y = z₃ = 1.26
Final Answer: The final output of the neural network is 1.26.
Summary of Forward Pass:
• Input: [1, 2]
• Hidden layer pre-activation: [0.8, 1.1]
• Hidden layer post-activation (ReLU): [0.8, 1.1]
• Output layer pre-activation: 1.26
• Final output (linear): 1.26
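A short NumPy sketch reproduces the same pass; the matrix layout below (row i holds the weights leaving input i) is an assumption that matches the indexing used above:

```python
import numpy as np

# Forward pass with the given parameters.
x = np.array([1.0, 2.0])
W1 = np.array([[0.1, 0.2],    # w11, w12 (input 1 -> hidden 1, 2)
               [0.3, 0.4]])   # w21, w22 (input 2 -> hidden 1, 2)
b1 = np.array([0.1, 0.1])
W2 = np.array([0.5, 0.6])     # w31, w32 (hidden 1, 2 -> output)
b2 = 0.2

z_hidden = x @ W1 + b1            # pre-activation: [0.8, 1.1]
h = np.maximum(0.0, z_hidden)     # ReLU: [0.8, 1.1]
y = W2 @ h + b2                   # linear output: 1.26
print(z_hidden, h, y)
```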
Solution 4:
K-Means Clustering Analysis (k=3, 2 iterations)
Given Data Points
Point Coordinates
A1 (2,10)
A2 (2,6)
A3 (11,11)
A4 (6,9)
A5 (6,4)
A6 (1,2)
A7 (5,10)
A8 (4,9)
A9 (10,12)
A10 (7,5)
A11 (9,11)
A12 (4,6)
A13 (3,10)
A14 (3,8)
A15 (6,11)
Note: A2 appears twice in the original data; I treat the two instances as (2,6) and (4,6).
Step 1: Choose Initial Centroids (Random Selection)
Let's randomly select 3 initial centroids:
• C1 = A2 = (2,6)
• C2 = A9 = (10,12)
• C3 = A10 = (7,5)
ITERATION 1
Step 2: Calculate Euclidean Distances and Assign Clusters
For each point, calculate distance to each centroid using: d = √[(x₂-x₁)² + (y₂-y₁)²]
Point Coords Dist to C1(2,6) Dist to C2(10,12) Dist to C3(7,5) Closest Cluster
A1 (2,10) 4.00 8.25 7.07 C1
A2 (2,6) 0.00 10.00 5.10 C1
A3 (11,11) 10.30 1.41 7.21 C2
A4 (6,9) 5.00 5.00 4.12 C3
A5 (6,4) 4.47 8.94 1.41 C3
A6 (1,2) 4.12 13.45 6.71 C1
A7 (5,10) 5.00 5.39 5.39 C1
A8 (4,9) 3.61 6.71 5.00 C1
A9 (10,12) 10.00 0.00 7.62 C2
A10 (7,5) 5.10 7.62 0.00 C3
A11 (9,11) 8.60 1.41 6.32 C2
A12 (4,6) 2.00 8.49 3.16 C1
A13 (3,10) 4.12 7.28 6.40 C1
A14 (3,8) 2.24 8.06 5.00 C1
A15 (6,11) 6.40 4.12 6.08 C2
Step 3: Update Centroids (End of Iteration 1)
Cluster 1: A1(2,10), A2(2,6), A6(1,2), A7(5,10), A8(4,9), A12(4,6), A13(3,10), A14(3,8)
• New C1 = ((2+2+1+5+4+4+3+3)/8, (10+6+2+10+9+6+10+8)/8) = (3.0, 7.625)
Cluster 2: A3(11,11), A9(10,12), A11(9,11), A15(6,11)
• New C2 = ((11+10+9+6)/4, (11+12+11+11)/4) = (9.0, 11.25)
Cluster 3: A4(6,9), A5(6,4), A10(7,5)
• New C3 = ((6+6+7)/3, (9+4+5)/3) = (6.33, 6.0)
ITERATION 2
Step 4: Recalculate Distances with New Centroids
Point Coords Dist to C1(3.0,7.625) Dist to C2(9.0,11.25) Dist to C3(6.33,6.0) Closest Cluster
A1 (2,10) 2.58 7.11 5.90 C1
A2 (2,6) 1.91 8.75 4.33 C1
A3 (11,11) 8.68 2.02 6.84 C2
A4 (6,9) 3.30 3.75 3.02 C3
A5 (6,4) 4.71 7.85 2.03 C3
A6 (1,2) 5.97 12.23 6.66 C1
A7 (5,10) 3.10 4.19 4.22 C1
A8 (4,9) 1.70 5.48 3.80 C1
A9 (10,12) 8.25 1.25 7.03 C2
A10 (7,5) 4.78 6.56 1.20 C3
A11 (9,11) 6.88 0.25 5.67 C2
A12 (4,6) 1.91 7.25 2.33 C1
A13 (3,10) 2.38 6.13 5.20 C1
A14 (3,8) 0.38 6.82 3.88 C1
A15 (6,11) 4.52 3.01 5.01 C2
Step 5: Final Cluster Assignments (After Iteration 2)
Cluster 1 (Red): A1, A2, A6, A7, A8, A12, A13, A14
• Points: (2,10), (2,6), (1,2), (5,10), (4,9), (4,6), (3,10), (3,8)
Cluster 2 (Blue): A3, A9, A11, A15
• Points: (11,11), (10,12), (9,11), (6,11)
Cluster 3 (Green): A4, A5, A10
• Points: (6,9), (6,4), (7,5)
Final Centroids (After Iteration 2)
• C1 = (3.0, 7.625)
• C2 = (9.0, 11.25)
• C3 = (6.33, 6.0)
Summary
After 2 iterations of K-Means clustering with k=3:
1. Cluster 1 contains 8 points, mainly on the left side of the coordinate space
2. Cluster 2 contains 4 points in the upper-right region
3. Cluster 3 contains 3 points in the middle-right region
The cluster assignments did not change between iterations 1 and 2, so the algorithm has converged
to a stable partitioning in which each cluster occupies a distinct spatial region.
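For reference, a compact NumPy sketch of the two iterations, assuming the same fixed initial centroids chosen in Step 1 (in practice they would be picked at random):

```python
import numpy as np

# Two iterations of k-means with the initial centroids C1=A2, C2=A9, C3=A10.
points = np.array([(2, 10), (2, 6), (11, 11), (6, 9), (6, 4), (1, 2), (5, 10),
                   (4, 9), (10, 12), (7, 5), (9, 11), (4, 6), (3, 10), (3, 8),
                   (6, 11)], dtype=float)
centroids = np.array([(2, 6), (10, 12), (7, 5)], dtype=float)

for it in range(2):
    # Assignment step: each point goes to its nearest centroid (Euclidean).
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each centroid becomes the mean of its assigned points.
    centroids = np.array([points[labels == k].mean(axis=0) for k in range(3)])
    print(f"iteration {it + 1}: centroids =\n{centroids}")
```

Running this reproduces the centroids above, (3.0, 7.625), (9.0, 11.25), and (6.33, 6.0), in both iterations, confirming that the assignments are stable.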
Solution 5:
I'll perform one step of forward propagation through this neural network with sigmoid activation.
Given:
• Input: X₁ = 1, X₂ = -1
• Hidden neuron: W₁ = 0.3, W₂ = -0.2, bias = 0.1
• Output neuron: W₃ = 0.4, bias = -0.1
• Target output: Y = 0.5
• Activation function: Sigmoid σ(z) = 1/(1 + e⁻ᶻ)
Step 1: Input to Hidden Layer
Calculate the weighted sum for the hidden neuron:
• z₁ = W₁X₁ + W₂X₂ + bias
• z₁ = 0.3(1) + (-0.2)(-1) + 0.1
• z₁ = 0.3 + 0.2 + 0.1 = 0.6
Apply sigmoid activation:
• h₁ = σ(z₁) = σ(0.6) = 1/(1 + e⁻⁰·⁶)
• h₁ = 1/(1 + 0.5488) = 1/1.5488 = 0.6457
Step 2: Hidden Layer to Output Layer
Calculate the weighted sum for the output neuron:
• z₂ = W₃h₁ + bias
• z₂ = 0.4(0.6457) + (-0.1)
• z₂ = 0.2583 - 0.1 = 0.1583
Apply sigmoid activation:
• ŷ = σ(z₂) = σ(0.1583) = 1/(1 + e⁻⁰·¹⁵⁸³)
• ŷ = 1/(1 + 0.8537) = 1/1.8537 = 0.5395
Final Results:
• Network output: ŷ = 0.5395
• Target output: Y = 0.5
• Error: |0.5395 - 0.5| = 0.0395
Summary of Forward Propagation:
• Input: [1, -1]
• Hidden layer pre-activation: z₁ = 0.6
• Hidden layer post-activation: h₁ = 0.6457
• Output layer pre-activation: z₂ = 0.1583
• Final output: ŷ = 0.5395
The network produces an output of 0.5395, which is very close to the target value of 0.5.
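The same forward pass in a few lines of Python (a minimal sketch; variable names are illustrative):

```python
import math

# One forward pass through the 2-1-1 network with sigmoid activations.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x1, x2 = 1.0, -1.0
z1 = 0.3 * x1 + (-0.2) * x2 + 0.1   # hidden pre-activation: 0.6
h1 = sigmoid(z1)                    # hidden activation: ~0.6457
z2 = 0.4 * h1 + (-0.1)              # output pre-activation: ~0.1583
y_hat = sigmoid(z2)                 # network output: ~0.5395
print(z1, h1, z2, y_hat, abs(y_hat - 0.5))
```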
Solution 6:
Simple Linear Regression Calculation
Given Data
House Size (sq. ft.) Price ($1000s)
1400 245
1600 312
1700 279
1875 308
1100 199
Step 1: Calculate Basic Statistics
Sample size: n = 5
Calculate means:
• X̄ = (1400 + 1600 + 1700 + 1875 + 1100) ÷ 5 = 7675 ÷ 5 = 1535
• Ȳ = (245 + 312 + 279 + 308 + 199) ÷ 5 = 1343 ÷ 5 = 268.6
Step 2: Calculate Required Sums
X Y (X - X̄) (Y - Ȳ) (X - X̄)(Y - Ȳ) (X - X̄)²
1400 245 -135 -23.6 3,186 18,225
1600 312 65 43.4 2,821 4,225
1700 279 165 10.4 1,716 27,225
1875 308 340 39.4 13,396 115,600
1100 199 -435 -69.6 30,276 189,225
Sums:
• Σ(X - X̄)(Y - Ȳ) = 3,186 + 2,821 + 1,716 + 13,396 + 30,276 = 51,395
• Σ(X - X̄)² = 18,225 + 4,225 + 27,225 + 115,600 + 189,225 = 354,500
Step 3: Calculate Regression Coefficients
Slope (β₁):
β₁ = Σ(X - X̄)(Y - Ȳ) / Σ(X - X̄)² = 51,395 / 354,500 ≈ 0.145
Intercept (β₀):
β₀ = Ȳ - β₁X̄ = 268.6 - (0.145 × 1535) = 268.6 - 222.575 = 46.025
Final Results
Regression Equation: Y = 46.025 + 0.145X
• β₀ (intercept) = 46.025 (in $1000s, so $46,025)
• β₁ (slope) = 0.145 (price increases by $145 per additional sq. ft.)
Interpretation
• The intercept implies a base price of about $46,025 at size 0, though this lies well outside the
observed data range and serves mainly to anchor the line
• For every additional square foot, the house price increases by approximately $145
• A 1500 sq. ft. house would be predicted to cost: 46.025 + 0.145(1500) = 263.525 ($1000s) ≈ $263,525
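A small Python sketch of the same least-squares calculation; note that it keeps the slope unrounded, so the intercept comes out near 46.04 rather than the hand-rounded 46.025:

```python
# Least-squares slope and intercept from the deviation sums above.
sizes = [1400, 1600, 1700, 1875, 1100]   # X, sq. ft.
prices = [245, 312, 279, 308, 199]       # Y, $1000s

n = len(sizes)
x_bar = sum(sizes) / n                   # 1535
y_bar = sum(prices) / n                  # 268.6
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(sizes, prices))  # 51,395
sxx = sum((x - x_bar) ** 2 for x in sizes)                           # 354,500

slope = sxy / sxx                        # ~0.145 (unrounded: 0.14499...)
intercept = y_bar - slope * x_bar        # ~46.04 with the unrounded slope
prediction = intercept + slope * 1500    # ~263.525 ($1000s) for 1500 sq. ft.
print(slope, intercept, prediction)
```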
Solution 7:
Sales Regression Analysis
Given Data
Year (x) Sales (y)
2005 12
2006 19
2007 29
2008 37
2009 45
Step 1: Transform the Data
To simplify calculations, let's transform the years by subtracting 2005:
• x = 0 for 2005, x = 1 for 2006, x = 2 for 2007, x = 3 for 2008, x = 4 for 2009
x y
0 12
1 19
2 29
3 37
4 45
Step 2: Calculate Basic Statistics
Sample size: n = 5
Calculate means:
• x̄ = (0 + 1 + 2 + 3 + 4) ÷ 5 = 10 ÷ 5 = 2
• ȳ = (12 + 19 + 29 + 37 + 45) ÷ 5 = 142 ÷ 5 = 28.4
Step 3: Calculate Required Sums
x y (x - x̄) (y - ȳ) (x - x̄)(y - ȳ) (x - x̄)²
0 12 -2 -16.4 32.8 4
1 19 -1 -9.4 9.4 1
2 29 0 0.6 0 0
3 37 1 8.6 8.6 1
4 45 2 16.6 33.2 4
Sums:
• Σ(x - x̄)(y - ȳ) = 32.8 + 9.4 + 0 + 8.6 + 33.2 = 84
• Σ(x - x̄)² = 4 + 1 + 0 + 1 + 4 = 10
Step 4: Calculate Regression Coefficients
Slope (b):
b = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)² = 84 / 10 = 8.4
Intercept (a):
a = ȳ - b·x̄ = 28.4 - (8.4 × 2) = 11.6
Step 5: Regression Equation (Transformed Scale)
y = 11.6 + 8.4x (where x = 0 corresponds to 2005)
Step 6: Convert Back to Original Scale
Since x = Year - 2005, we have:
y = 11.6 + 8.4(Year - 2005)
y = 11.6 + 8.4·Year - 16,842
y = -16,830.4 + 8.4·Year
Final Answer
A. Least Squares Regression Line: y = -16,830.4 + 8.4x
Where:
• y = sales (in units)
• x = year
• The slope (8.4) means sales increase by 8.4 units per year
• Sales are growing at a rate of 8.4 units annually
Verification
Let's verify with our data points:
• 2005: y = -16,830.4 + 8.4(2005) = -16,830.4 + 16,842 = 11.6 ≈ 12 ✓
• 2006: y = -16,830.4 + 8.4(2006) = -16,830.4 + 16,850.4 = 20 ≈ 19 ✓
• 2007: y = -16,830.4 + 8.4(2007) = -16,830.4 + 16,858.8 = 28.4 ≈ 29 ✓
• 2008: y = -16,830.4 + 8.4(2008) = -16,830.4 + 16,867.2 = 36.8 ≈ 37 ✓
• 2009: y = -16,830.4 + 8.4(2009) = -16,830.4 + 16,875.6 = 45.2 ≈ 45 ✓
Sales Estimates for Future Years
2010: y = -16,830.4 + 8.4(2010) = -16,830.4 + 16,884 = 53.6 units
2011: y = -16,830.4 + 8.4(2011) = -16,830.4 + 16,892.4 = 62.0 units
2012: y = -16,830.4 + 8.4(2012) = -16,830.4 + 16,900.8 = 70.4 units
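The same fit and forecasts, sketched in Python on the shifted scale (x = Year - 2005):

```python
# Fit the sales trend on the shifted scale and forecast future years.
years = [2005, 2006, 2007, 2008, 2009]
sales = [12, 19, 29, 37, 45]

xs = [year - 2005 for year in years]     # 0, 1, 2, 3, 4
x_bar = sum(xs) / len(xs)                # 2
y_bar = sum(sales) / len(sales)          # 28.4
b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, sales))
     / sum((x - x_bar) ** 2 for x in xs))  # 84 / 10 = 8.4
a = y_bar - b * x_bar                    # 11.6

for year in (2010, 2011, 2012):
    print(year, a + b * (year - 2005))   # 53.6, 62.0, 70.4
```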
KNN Classification (K=3)
I'll solve this KNN classification problem step by step, classifying the new point P(5,3) with K=3.
Step 1: Calculate Euclidean distances from P(5,3) to all points:
• d(P,A) = √[(5-2)² + (3-4)²] = √10 ≈ 3.16
• d(P,B) = √[(5-4)² + (3-2)²] = √2 ≈ 1.41
• d(P,C) = √[(5-4)² + (3-4)²] = √2 ≈ 1.41
• d(P,D) = √[(5-6)² + (3-2)²] = √2 ≈ 1.41
• d(P,E) = √[(5-6)² + (3-4)²] = √2 ≈ 1.41
• d(P,F) = √[(5-6)² + (3-6)²] = √10 ≈ 3.16
Step 2: Identify the 3 nearest neighbors: Four points (B, C, D, E) are tied with the minimum distance
of 1.41. Since we need K=3 neighbors, we select any 3 of these 4 points. Let's choose B, C, and D.
Step 3: Apply majority voting:
• B(4,2): Red class
• C(4,4): Blue class
• D(6,2): Blue class
Result: Point P(5,3) is classified as Blue since 2 out of 3 nearest neighbors belong to the Blue class.
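A minimal Python sketch of the distance computation and vote; the classes of A, E, and F are not stated above, so only the chosen trio B, C, D carries labels here:

```python
import math
from collections import Counter

# Distances from P(5,3) to all points, then a majority vote over the
# three chosen neighbors (B, C, D).
points = {"A": (2, 4), "B": (4, 2), "C": (4, 4),
          "D": (6, 2), "E": (6, 4), "F": (6, 6)}
labels = {"B": "Red", "C": "Blue", "D": "Blue"}  # classes given in the solution
p = (5, 3)

dists = {name: math.dist(p, xy) for name, xy in points.items()}
print({name: round(d, 2) for name, d in dists.items()})  # four-way tie at 1.41

chosen = ["B", "C", "D"]  # one valid tie-break among B, C, D, E
vote = Counter(labels[name] for name in chosen).most_common(1)[0][0]
print("P classified as:", vote)  # Blue
```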