Practice 20 Neural Networks multiple-choice questions designed for CDAC CCAT exam preparation. Each question below lists the correct option with a detailed explanation.
Question 1
Correct Answer: B — Biological neurons in the brain
Artificial neural networks are inspired by the structure and function of biological neural networks in the brain.
Question 2
Correct Answer: B — Single layer neural network with one neuron
A perceptron is the simplest neural network: a single neuron that performs binary classification.
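A minimal NumPy sketch of a single-neuron perceptron; the AND-gate data and hyperparameters are illustrative assumptions, not part of the question:

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=20):
    # One neuron: a weight per input plus a bias, with a step activation.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Perceptron learning rule: update only on misclassifications.
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Toy AND-gate dataset (linearly separable, so the perceptron converges).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]
```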
Question 3
Correct Answer: B — Non-linearity to the network
Activation functions introduce non-linearity, allowing networks to learn complex patterns.
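A quick NumPy demonstration of why this matters: without an activation, stacked linear layers collapse into a single linear map (the matrices and input here are hand-picked examples):

```python
import numpy as np

W1 = np.array([[1.0, 0.0],
               [0.0, -1.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 1.0])

# Two linear layers with no activation are just one linear layer:
print(W2 @ (W1 @ x), (W2 @ W1) @ x)   # [0.] [0.]  (identical)

# A ReLU between the layers breaks the collapse into a single matrix:
relu = lambda z: np.maximum(0, z)
print(W2 @ relu(W1 @ x))              # [1.]  (no longer linear in x)
```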
Question 4
Correct Answer: B — f(x) = max(0, x)
ReLU (Rectified Linear Unit) returns x if positive, otherwise 0. Simple yet effective.
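In code, ReLU is a one-liner (NumPy used here for illustration):

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): keeps positives, zeroes out negatives.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```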
Question 5
Correct Answer: B — 0 and 1
Sigmoid squashes values to range (0, 1), useful for probability outputs.
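A small sketch showing the squashing behaviour:

```python
import numpy as np

def sigmoid(x):
    # Maps any real input into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # ~[0.0000454  0.5  0.9999546]
```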
Question 6
Correct Answer: B — Calculate gradients and update weights
Backpropagation calculates gradients of the loss with respect to weights, enabling weight updates.
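A hand-worked sketch of the chain rule for a single sigmoid neuron with squared loss; the input, target, and initial parameters are made-up values:

```python
import numpy as np

x, t = 1.5, 1.0          # input and target (illustrative values)
w, b = 0.2, 0.0          # initial parameters

z = w * x + b            # forward pass
a = 1 / (1 + np.exp(-z))
loss = (a - t) ** 2      # squared loss

# Backward pass: chain rule from the loss back to each parameter.
dL_da = 2 * (a - t)
da_dz = a * (1 - a)         # derivative of the sigmoid
dL_dw = dL_da * da_dz * x   # since dz/dw = x
dL_db = dL_da * da_dz       # since dz/db = 1

w -= 0.1 * dL_dw         # the gradients enable the weight update
b -= 0.1 * dL_db
```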
Question 7
Correct Answer: B — Loss/error function
Gradient descent iteratively adjusts weights to minimize the loss function.
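A one-dimensional sketch: minimizing the toy loss L(w) = (w - 3)^2, whose gradient is 2(w - 3):

```python
w, lr = 0.0, 0.1
for _ in range(50):
    grad = 2 * (w - 3)   # gradient of the loss at the current w
    w -= lr * grad       # step against the gradient to reduce the loss
print(round(w, 4))       # ~3.0, the minimizer of the loss
```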
Question 8
Correct Answer: B — Controls step size in weight updates
Learning rate controls how much weights are adjusted in each update step during training.
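Reusing the toy loss above, the step size decides between convergence and divergence (the two rates are arbitrary examples):

```python
def descend(lr, steps=20):
    # Same toy loss L(w) = (w - 3)^2; only the learning rate differs.
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

print(descend(0.1))   # ~2.97, converging toward 3
print(descend(1.1))   # huge magnitude: the steps overshoot and diverge
```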
Question 9
Correct Answer: B — One complete pass through entire training dataset
An epoch is one complete pass through the entire training dataset.
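A schematic training loop (the data and batch size are placeholders) showing that one epoch touches every sample exactly once:

```python
import numpy as np

X = np.arange(10)                 # toy dataset of 10 samples
epochs, batch_size = 3, 4

for epoch in range(epochs):       # each epoch = one full pass over X
    for start in range(0, len(X), batch_size):
        batch = X[start:start + batch_size]
        # ... forward pass, loss, backprop, weight update on `batch` ...
    print(f"epoch {epoch + 1} complete: all {len(X)} samples seen once")
```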
Question 10
Correct Answer: B — Neural networks with many layers
Deep learning uses neural networks with multiple hidden layers (deep networks) to learn hierarchical features.
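A minimal sketch of a forward pass through a deep multilayer perceptron; the layer sizes are arbitrary:

```python
import numpy as np

def forward(x, layers):
    for W, b in layers[:-1]:
        x = np.maximum(0, W @ x + b)   # hidden layers: linear + ReLU
    W, b = layers[-1]
    return W @ x + b                   # linear output layer

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 16, 4]             # input, three hidden layers, output
layers = [(0.1 * rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
print(forward(rng.normal(size=8), layers).shape)  # (4,)
```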
Question 11
Correct Answer: B — Gradients become very small in deep networks
In deep networks, gradients can become extremely small during backpropagation, preventing weight updates in early layers.
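A back-of-the-envelope illustration: the sigmoid's slope is at most 0.25, and backpropagation multiplies roughly one such factor per layer:

```python
# Gradient magnitude shrinks geometrically with depth when each layer
# contributes at most the sigmoid's maximum slope of 0.25.
for depth in [1, 5, 10, 20]:
    print(depth, 0.25 ** depth)  # 0.25, ~9.8e-4, ~9.5e-7, ~9.1e-13
```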
Question 12
Correct Answer: B — Prevent overfitting by randomly dropping neurons
Dropout randomly deactivates neurons during training, preventing overfitting by reducing co-adaptation.
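A sketch of "inverted" dropout, the common formulation; the drop probability is an illustrative choice:

```python
import numpy as np

def dropout(a, p=0.5, training=True):
    # Zero each activation with probability p during training and rescale
    # the survivors by 1/(1-p); do nothing at inference time.
    if not training:
        return a
    mask = (np.random.rand(*a.shape) > p) / (1 - p)
    return a * mask

a = np.ones(8)
print(dropout(a, p=0.5))           # roughly half the entries zeroed
print(dropout(a, training=False))  # unchanged at inference
```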
Question 13
Correct Answer: B — Normalizes layer inputs for faster training
Batch normalization normalizes layer inputs, accelerating training and reducing internal covariate shift.
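A sketch of the core computation (the learnable scale gamma and shift beta are shown as fixed scalars for simplicity):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature across the batch, then scale and shift.
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
out = batch_norm(x)
print(out.mean(axis=0), out.std(axis=0))  # ~[0 0] and ~[1 1] per feature
```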
Question 14
Correct Answer: B — Output layer for multi-class classification
Softmax converts raw outputs to probability distribution over multiple classes (summing to 1).
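A numerically stable sketch:

```python
import numpy as np

def softmax(z):
    # Subtracting the max avoids overflow; the output sums to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())  # ~[0.659 0.242 0.099], sum = 1.0
```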
Question 15
Correct Answer: B — Difference between predicted and actual values
The loss function quantifies how far the model's predictions are from the actual target values.
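Mean squared error is one concrete example of such a function:

```python
import numpy as np

def mse(pred, target):
    # Average squared gap between predictions and actual values.
    return np.mean((pred - target) ** 2)

print(mse(np.array([2.5, 0.0]), np.array([3.0, 0.0])))  # 0.125
```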
Question 16
Correct Answer: B — Momentum and adaptive learning rates
Adam combines momentum with per-parameter adaptive learning rates (as in RMSprop) for efficient optimization.
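A sketch of the Adam update rule with its standard default decay rates; the toy loss L(w) = w^2 and the learning rate are illustrative choices:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad        # first moment: momentum
    v = b2 * v + (1 - b2) * grad ** 2   # second moment: adaptive scaling
    m_hat = m / (1 - b1 ** t)           # bias corrections for early steps
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):                 # minimize L(w) = w^2, gradient 2w
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(round(w, 3))                      # close to 0
```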
Question 17
Correct Answer: B — Randomly with small values
Weights are randomly initialized with small values to break symmetry and enable learning.
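A short illustration of why zeros fail: identically initialized neurons produce identical outputs, receive identical gradients, and never specialize:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=3)

# Zero init: all four hidden neurons compute the same value, so they
# also receive identical gradients and stay identical forever.
W_zero = np.zeros((4, 3))
print(W_zero @ x)                     # [0. 0. 0. 0.]

# Small random init: each neuron starts different ("symmetry broken").
W_rand = 0.01 * rng.normal(size=(4, 3))
print(W_rand @ x)                     # four distinct small values
```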
Question 18
Correct Answer: B — Are between input and output layers
Hidden layers are intermediate layers between input and output that learn internal representations.
Question 19
Correct Answer: B — Classification tasks
Cross-entropy loss measures the difference between the predicted probability distribution and the actual class labels.
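A sketch for a single sample, with a made-up softmax output:

```python
import numpy as np

def cross_entropy(probs, true_class):
    # Negative log of the probability assigned to the correct class.
    return -np.log(probs[true_class])

probs = np.array([0.7, 0.2, 0.1])   # hypothetical softmax output
print(cross_entropy(probs, 0))      # ~0.357: confident and correct
print(cross_entropy(probs, 2))      # ~2.303: wrong class, heavily penalized
```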
Question 20
Correct Answer: B — Using pre-trained model for new task
Transfer learning uses knowledge (weights) from a model trained on one task for a related task.
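A minimal NumPy sketch of the idea: a "pretrained" layer is frozen while a fresh output head is trained on the new task (all data, sizes, and rates here are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

W_pre = 0.1 * rng.normal(size=(16, 8))  # stands in for pretrained weights
W_new = 0.1 * rng.normal(size=(1, 16))  # fresh head for the new task

X = rng.normal(size=(32, 8))            # placeholder data for the new task
y = rng.normal(size=(32, 1))

for _ in range(100):
    H = np.maximum(0, X @ W_pre.T)      # frozen features: W_pre never updated
    pred = H @ W_new.T
    grad = 2 * (pred - y).T @ H / len(X)
    W_new -= 0.1 * grad                 # only the new head is trained
```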