Finite Automata Answers
1. Define Finite Automata. Explain the basic components with an example.
1. Finite Automaton is a machine used to recognize patterns in input strings.
2. It has a finite number of states and follows rules to change states.
3. Components:
   Q: finite set of states
   Σ: finite set of input symbols
   q0: initial state
   F: set of final (accepting) states
   δ: transition function
4. Example: For string "ab", use states q0→q1→q2 for a and b.
5. If it ends in the final state, the input is accepted.
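To make the example concrete, below is a minimal Python sketch of such a machine (the dict-based encoding and state names are illustrative, not part of the original notes):

```python
# Minimal DFA sketch for the Q1 example: accepts exactly the string "ab".
# States: q0 (start), q1 (after reading 'a'), q2 (final); missing entries act as a dead state.
delta = {("q0", "a"): "q1", ("q1", "b"): "q2"}
accepting = {"q2"}

def run_dfa(word):
    state = "q0"
    for symbol in word:
        state = delta.get((state, symbol))  # None models the implicit dead state
        if state is None:
            return False
    return state in accepting

print(run_dfa("ab"))   # True  -> ends in the final state, so accepted
print(run_dfa("aab"))  # False -> rejected
```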
2. Classify the different types of Finite Automata.
1. DFA (Deterministic Finite Automaton): for each input symbol, only one possible next state exists from any given state. Example: accepts strings ending with "01".
2. NFA (Non-deterministic Finite Automaton): can move to multiple states for the same input symbol from a given state. Example: accepts strings containing "ab" or "ba".
3. ε-NFA (Epsilon-NFA, with ε-transitions): can make transitions without any input (ε-moves). Example: moves from state q0 to q1 without consuming input.
4. All three accept regular languages only.
5. DFAs are easier to implement in hardware/software due to determinism.
3. What is a Deterministic Finite Automaton (DFA)? Explain its formal definition.
1. DFA is a finite automaton where one input leads to one state.
2. It has a single unique transition for each symbol from each state.
3. Formally, DFA = (Q, Σ, δ, q₀, F).
4. Q = states, Σ = input, δ = transition function, q₀ = start, F = final.
5. It always knows what state to go to for a given input.
4. Discuss the applications of Finite Automata.
1. Used in designing lexical analysers in compilers.
2. Helps in pattern matching like finding words in text.
3. Used in text processing tools like grep.
4. Useful in network protocols for control and flow.
5. Helps to model digital circuits and software design.
5. What is a Nondeterministic Finite Automaton (NFA)? Explain its formal definition.
1. NFA can go to many states for one input symbol.
2. It allows multiple transitions from a state.
3. Formally, NFA = (Q, Σ, δ, q₀, F).
4. δ: Q × Σ → 2^Q
5. Transition function δ returns a subset of states (i.e., a set of possible next states).
6. If any path leads to a final state, input is accepted.
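The set-valued transition function can be simulated directly by tracking the set of currently reachable states; a minimal sketch (the example NFA, which accepts strings ending in "ab", is illustrative):

```python
# NFA sketch: delta maps (state, symbol) to a *set* of next states (δ: Q × Σ → 2^Q).
# This illustrative NFA accepts strings over {a, b} that end in "ab".
delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}
start, accepting = "q0", {"q2"}

def run_nfa(word):
    current = {start}
    for symbol in word:
        current = set().union(*(delta.get((q, symbol), set()) for q in current))
    return bool(current & accepting)  # accepted if any path reaches a final state

print(run_nfa("aab"))  # True
print(run_nfa("aba"))  # False
```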
6. Difference between DFA and NFA
DFA (Deterministic Finite Automaton) vs NFA (Nondeterministic Finite Automaton):
1. DFA: only one path for each input symbol; NFA: can have multiple paths for an input symbol.
2. DFA: does not allow ε (epsilon) moves; NFA: allows ε (epsilon) moves.
3. DFA: easy to implement in machines; NFA: easier to design logically.
4. DFA: usually needs more states; NFA: often needs fewer states.
5. Both accept the same regular languages.
7. Design an FA with Σ = {0, 1} that accepts only the input 101
1. Q = {q0, q1, q2, q3}; q0 is the start state, q3 is the final state.
2. Transitions: q0 --1--> q1 --0--> q2 --1--> q3; any other symbol leads to a dead state.
3. The machine ends at q3 exactly when the input is "101", so only "101" is accepted.
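A small sketch of this machine in Python (state names follow the answer; the dict encoding is illustrative):

```python
# FA sketch for Q7: accepts only the string "101".
# Any symbol that deviates from 1-0-1 falls into an explicit dead state.
delta = {("q0", "1"): "q1", ("q1", "0"): "q2", ("q2", "1"): "q3"}

def accepts_only_101(word):
    state = "q0"
    for symbol in word:
        state = delta.get((state, symbol), "dead")
    return state == "q3"

print(accepts_only_101("101"))   # True
print(accepts_only_101("1011"))  # False
```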
8. Construct a DFA with Σ = {0, 1} that accepts all strings starting with 1
1. Q = {q0, q1, q2}; q0 is the start state, q1 is the accepting state, q2 is a dead state.
2. On input 1, q0 goes to q1 and stays there; on input 0, q0 goes to the dead state q2, so only strings starting with 1 are accepted.
9. Construct transition table from the image
10. Describe the role of ϵ-transitions in finite automata. Construct a small NFA-ϵ and
demonstrate how to eliminate ϵ-transitions to convert it into a standard NFA.
1. ε-transitions let the automaton change states without input.
2. Helps in building flexible NFAs.
3. To remove ε, compute ε-closure for each state.
4. Convert ε-NFA to NFA by updating transitions.
5. Easier to construct NFA using ε-moves, then convert.
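A short sketch of step 3, computing the ε-closure by following ε-moves transitively (the example ε-NFA with moves q0 → q1 → q2 is illustrative):

```python
# ε-closure sketch: all states reachable from a given state using only ε-moves.
eps_moves = {"q0": {"q1"}, "q1": {"q2"}}  # illustrative ε-NFA: q0 -ε-> q1 -ε-> q2

def eps_closure(state):
    closure, stack = {state}, [state]
    while stack:
        q = stack.pop()
        for r in eps_moves.get(q, set()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return closure

print(eps_closure("q0"))  # contains q0, q1 and q2 (the state itself is included)
```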
11. Draw transition diagram for the transition table
12. Convert the given NFA to DFA
13. Construct a DFA that accepts all binary strings which contain an even number of 0s and
an odd number of 1s.
14. Design an NFA in which all the strings contain the substring 1110.
1. Use states to track progress through the substring: "1 → 11 → 111 → 1110".
2. The accepting path passes through 4 transitions, one per symbol of 1110; the start state loops on both symbols and nondeterministically guesses where the match begins.
15. Design an FA with Σ = {0, 1} that accepts the set of all strings with three consecutive 0's.
16. Explain the applications of Regular expression.
1. Used in search tools like grep, sed.
2. Applied in lexical analysis in compilers.
3. Helps in string validation (like email).
4. Useful in pattern matching tasks.
5. Used in text editors for find/replace.
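For instance, points 3 and 5 can be sketched with Python's re module (the patterns themselves are simplified and illustrative):

```python
import re

# String validation: a deliberately simplified (illustrative) email pattern.
print(bool(re.fullmatch(r"[\w.]+@\w+\.[a-z]{2,}", "user@example.com")))  # True

# Find/replace, as done by text editors or tools like sed.
print(re.sub(r"\bcolour\b", "color", "the colour of the sky"))  # the color of the sky
```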
17. Design a FA from the given regular expression 10+(0+11)0*1
18. Construct a regular expression over the alphabet {0, 1} for the language consisting of
all strings that start with 1 and end with 0.
19. Design a regular expression for the set of strings over the alphabet {a, b} such that no
two b's are adjacent.
20. Convert the given DFA to RE.
Regular Expression = (ab + b(a + bb))*
1. A → B → A via a then b ⇒ gives loop ab.
2. A → C → A via b then a ⇒ gives loop ba.
3. A → C → B → A via b, b, b ⇒ gives loop bbb.
4. Combine ba and bbb as b(a + bb).
5. Final regular expression = loop of both paths: (ab + b(a + bb))*.
21. Construct a regular expression for the set of strings that start and end with the same
symbol over the alphabet {a, b}.
22. A language L is defined over {a, b} such that no string in L contains "bab" as a
substring. Construct a DFA that accepts L.
23. Define Regular grammar and explain the terminology notations.
24. Convert regular grammar to finite automata: S→0A/1S, A→1B/0S, B→1.
25. Provide the corresponding regular expression after the conversion, if any simplification
or changes occur during the DFA construction.
1. A regular expression can be converted to a DFA using intermediate steps (like NFA
and subset construction).
2. During DFA construction, the structure of the regex changes into states and
transitions but the language stays the same.
3. When converting the DFA back into a regex (using state elimination), the resulting
regex may look different.
4. The new regex is equivalent to the original—it matches the same set of strings, even
if it looks more complex or simplified.
5. Simplification may occur, but the final regular expression represents the same
language accepted by the DFA.
26. Convert the NFA to a DFA for the language where strings contain exactly two a's over
the alphabet {a, b}.
1. Reading b does not change the state.
2. The accepting state is reached after exactly two a's.
3. Any further a moves the machine to a trap state.
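A counting sketch of this DFA's behaviour (the counter corresponds to the states for 0, 1, or 2 a's seen so far, plus a trap; illustrative, not the formal subset construction):

```python
# DFA sketch for "exactly two a's" over {a, b}: the counter mirrors the states
# q0 (no a), q1 (one a), q2 (two a's, accepting) and a trap for three or more.
def exactly_two_as(word):
    count = 0
    for symbol in word:
        if symbol == "a":
            count += 1
            if count > 2:        # third 'a' -> trap state, never accepted
                return False
    return count == 2            # accept iff we end in the "two a's" state

print(exactly_two_as("baba"))   # True
print(exactly_two_as("baaba"))  # False (three a's)
```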
27. Convert regular grammar to finite automata S→aA/bB, A→aA/bB/ϵ, B→bB/aB/ϵ.
28. State and prove Pumping lemma for regular languages.
1. Statement: If L is regular, ∃ a number p such that any string s ∈ L with |s| ≥ p can be split as s = xyz.
2. Conditions: |y| > 0, |xy| ≤ p, and for all i ≥ 0, xyⁱz ∈ L.
3. Purpose: Used to prove a language is not regular by contradiction.
4. Proof Idea: In a DFA with p states, long strings must repeat states → creates a loop (y
part).
5. Conclusion: If no such split exists for some string, L is not regular.
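As a worked illustration of step 5 (a standard textbook example, not part of the original answer), the lemma shows L = {0ⁿ1ⁿ | n ≥ 0} is not regular:

```latex
% Assume L = \{0^n 1^n \mid n \ge 0\} is regular with pumping length p; take s = 0^p 1^p.
% Since |xy| \le p and |y| > 0, any split s = xyz forces y = 0^k with k \ge 1. Pumping once more:
\[
  x y^{2} z \;=\; 0^{\,p+k}\,1^{\,p} \notin L \qquad (k \ge 1),
\]
% which contradicts the lemma, so L is not regular.
```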
29. Describe the applications of pumping lemma.
1. Prove language is not regular.
2. Check limits of regular expressions.
3. Used in theoretical proofs.
4. Helps in classifying languages.
5. Used in compiler theory.
30. Explain any two applications of pumping lemma.
1. To Prove a Language is Not Regular
Most common use.
Assume the language is regular, apply the pumping lemma, and show a
contradiction.
2. To Distinguish Between Regular and Non-Regular Languages
Helps in classification.
If a language fails the lemma, it's non-regular; if it passes, it might be regular (but not
guaranteed).
31. Explain Context Free Grammar (CFG)?
1. CFG is used to define rules for making sentences in a language (like programming
languages).
2. It has 4 parts: variables, terminals, rules, and a start symbol.
3. Rules change one symbol into others, like S → aSb.
4. It is called "context-free" because rules work without checking surroundings.
5. Example: With S → aSb | ε, we can get words like ab, aabb, aaabbb.
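A minimal membership check for the example language L(S → aSb | ε) = {aⁿbⁿ | n ≥ 0} (a sketch for this one grammar, not a general CFG parser):

```python
# Membership test for the example grammar S -> aSb | ε, i.e. L = { a^n b^n : n >= 0 }.
def in_language(word):
    if word == "":                                   # S -> ε
        return True
    if word.startswith("a") and word.endswith("b"):  # S -> a S b
        return in_language(word[1:-1])
    return False

print(in_language("aabb"))  # True
print(in_language("abab"))  # False
```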
32. Define derivation tree and explain the types of derivation tree?
1. Derivation tree shows how a string is generated using grammar rules, step by step.
2. The root is the start symbol, and leaves are the final string (terminals).
3. There are two types of derivations: leftmost and rightmost.
4. In leftmost derivation, we replace the leftmost non-terminal first.
5. In rightmost derivation, we replace the rightmost non-terminal first.
33. Explain parse tree and properties of parse tree.
1. A parse tree (also called derivation tree) shows how a string is made from grammar
rules.
2. The root node is the start symbol, and leaf nodes are terminal symbols (actual input).
3. Each internal node represents a non-terminal, and branches show production rules
used.
4. The tree shows the structure of the input based on the grammar.
5. Properties:
a. Root = Start symbol
b. Leaves = Input string
c. Left-to-right reading of leaves gives the final string
34. Explain how reduction is used to prove that a problem is undecidable. Give a general
overview.
1. Reduction means changing one problem into another in a way that solving one helps
solve the other.
2. If we can reduce a known undecidable problem to a new problem, the new one is
also undecidable.
3. This is used to prove that a new problem is undecidable by linking it to a hard
problem.
4. The logic is: “If we could solve the new problem, we could solve the old one too”,
which is not possible.
5. Example: Halting Problem is undecidable → reduce it to another problem to show
that problem is also undecidable
35. Explain Ambiguous Grammar with example?
1. A grammar is ambiguous if a string has more than one parse tree or derivation.
2. This means the grammar gives multiple meanings or structures for the same input.
3. Ambiguity is not good for compilers because it causes confusion in understanding
code.
4. To fix it, we need to rewrite the grammar to make it unambiguous.
5. Example: For grammar
E → E + E | E * E | id
The string id + id * id has two parse trees (depending on order of operations).
36. Difference between NFA and PDA
NFA (Nondeterministic Finite Automata) vs PDA (Pushdown Automata):
1. NFA: no memory (only states); PDA: uses a stack as memory.
2. NFA: accepts Regular Languages; PDA: accepts Context-Free Languages.
3. NFA: does not use a stack; PDA: uses a stack.
4. NFA: less powerful; PDA: more powerful than an NFA.
5. Example: NFA accepts strings like a*b*; PDA accepts balanced parentheses.
37. What is the transition function w.r.t CFG? Explain
1. In a CFG, the role of the transition function is played by the production rules, which show how to replace one symbol with others.
2. It maps a non-terminal to a string of terminals and/or non-terminals.
3. It helps in moving from one step of the derivation to the next.
4. Example: If the rule is A → aB, the transition function moves A to aB.
5. This function guides the process of generating strings in the language from the start
symbol.
38. Illustrate the equivalence of PDA and CFG?
1. Both PDA (Pushdown Automata) and CFG (Context-Free Grammar) describe
Context-Free Languages.
2. For every CFG, there is a PDA that accepts the same language generated by that CFG.
3. For every PDA, there is a CFG that generates the same language accepted by that
PDA.
4. This shows PDAs and CFGs are equally powerful in expressing context-free
languages.
5. Example: Balanced parentheses can be described by both a CFG and a PDA.
39. Explain the steps to convert CFL TO PDA?
1. Start with a CFG that generates the CFL you want to convert.
2. Create a PDA with one state and use the stack to simulate the CFG derivation.
3. The PDA pushes the start symbol of the CFG onto the stack at the beginning.
4. For each production rule, the PDA replaces the top stack symbol with the right side
of the rule.
5. The PDA reads input symbols and pops matching terminals from the stack, accepting
when the stack is empty.
40. Define the sentential form and explain its types.
1. A sentential form is any string made of terminals and/or non-terminals derived from
the start symbol in CFG.
2. It shows the intermediate steps while generating a sentence in the language.
3. Types of sentential forms:
Leftmost sentential form: derived by always replacing the leftmost non-terminal
first.
Rightmost sentential form: derived by always replacing the rightmost non-terminal
first.
4. Sentential forms help show different ways to derive the same string.
5. Example: For S → aSb | ε, aSb is a sentential form between S and ab.
41. Explain any two applications of pumping lemma.
1. To Prove a Language is Not Regular
Most common use.
Assume the language is regular, apply the pumping lemma, and show a
contradiction.
2. To Distinguish Between Regular and Non-Regular Languages
Helps in classification.
If a language fails the lemma, it's non-regular; if it passes, it might be regular (but not
guaranteed).
42. Explain multiple tracks Turing machine, Or Give the significance of multiple tracks.
1. A multiple tracks Turing machine has a tape divided into several parallel tracks on
the same tape cell.
2. Each track can hold a different symbol, so the machine reads and writes multiple
symbols at once.
3. It makes the machine more powerful and efficient by storing more information in
one tape cell.
4. The transition function reads and writes a tuple of symbols, one from each track.
5. It helps simplify complex computations by handling several data streams
simultaneously.
43. Discuss about PDA or Explain the Deterministic push down automata.
PDA:
1. PDA is an automaton that uses a stack to store extra information while reading input.
2. It can have multiple possible moves (nondeterministic) for the same input and stack
symbol.
DPDA:
3. DPDA has at most one move for each state, input, and stack symbol combination.
4. It works without guessing, making it deterministic in processing input.
5. DPDA accepts only deterministic context-free languages, a smaller set than general
PDA.
44. Explain PUSH DOWN AUTOMATA and CFL (Context Free Languages)
1. Pushdown Automata (PDA) is a machine like a finite automaton but with a stack to
store extra information.
2. PDA reads input symbols and uses the stack to keep track of things like nested
structures.
3. Context-Free Languages (CFL) are languages that can be generated by Context-Free
Grammars (CFG).
4. PDAs can accept all CFLs, meaning every CFL has a PDA that recognizes it.
5. Example: Balanced parentheses language is a CFL and can be recognized by a PDA
using the stack.
45. Explain Top down Parsing and Bottom up Parsing.
1. Top-down: starts from start symbol to input.
2. Bottom-up: starts from input to start symbol.
3. Top-down predicts rules; bottom-up reduces strings.
4. Top-down uses recursion; bottom-up uses shift-reduce.
5. Bottom-up handles more grammars than top-down.
46 Write short notes on Decision algorithm.
1. An algorithm that gives yes/no answer.
2. Used to check properties of languages.
3. Example: Is the string accepted by DFA?
4. The algorithm always halts after a finite number of steps with a clear answer.
5. Decision algorithms help in solving problems like checking if a string belongs to a
language.
47. Discuss Transition Function and give their properties?
1. The transition function defines how a machine moves from one state to another
based on input and current conditions.
2. For a finite automaton, it maps a state and input symbol to the next state.
3. For a pushdown automaton, it depends on the current state, input symbol, and top
stack symbol to decide the next state and stack operation.
4. Properties include determinism (one next move) or nondeterminism (multiple
possible moves).
For DFA: δ(q, a) = q' (unique).
For NFA: δ(q, a) = set of states.
5. The transition function ensures the machine processes input step-by-step according to
its rules.
48. Convert the following CFG into Chomsky Normal Form (CNF): S → AB | a,
A→BC | b, B→a, C→c.
Given:
S → AB | a, A → BC | b, B → a, C → c
Steps:
1. All productions must be in A → BC or A → a form.
2. Already in CNF: all rules follow format.
3. S → AB | a (OK), A → BC | b (OK)
4. B → a (OK), C → c (OK)
5. So, it's already in Chomsky Normal Form.
49. Illustrate the applications of automata theory?
1. Used in compiler design and lexical analysis.
2. Helps in network protocol design.
3. Used in text pattern searching.
4. Used in software model checking.
5. Basis for designing digital circuits.
50. Explain the operations on strings or formal languages in Finite Automata / Automata Theory?
1. Union: Combine two languages.
2. Concatenation: Join strings from two languages.
3. Kleene Star: Repeat strings 0 or more times.
4. Complement: Accept strings not in the language.
5. Intersection: Strings common to both languages.
51. Explain Pumping Lemma for CFG.
1. The Pumping Lemma is a tool to prove some languages are NOT context-free.
2. It says: For every context-free language, there is a number p (called pumping length).
3. Any string s in the language with length ≥ p can be split into 5 parts: s = uvxyz.
4. We can repeat the parts v and y any number of times (uvⁱxyⁱz for i ≥ 0), and the string stays in the language.
5. Also, v and y cannot both be empty (|vy| > 0), and the length of vxy is at most p (|vxy| ≤ p).
52. Draw the diagram of the model of a Turing machine.
1. Turing machine diagram has states, tape, head.
2. States are like DFA states (q0, q1, etc.).
3. Tape is infinite and stores input symbols.
4. Head reads and writes symbols, moves L or R.
53. Explain the functioning of the Turing machine.
1. It reads input from a tape using a head.
2. Based on current symbol and state, it writes new symbol.
3. Then it moves tape head left or right.
4. Changes to a new state as per transition function.
5. It halts when there are no more moves (accept/reject).
54. Explain the general form of the Turing machine.
1. A Turing Machine is a 7-tuple: (Q, Σ, Γ, δ, q₀, B, F).
2. Q = set of states, Σ = input alphabet.
3. Γ = tape alphabet, B = blank symbol.
4. δ = transition function, q₀ = start state.
5. F = set of final (accepting) states.
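A minimal sketch of how the 7-tuple can be written down concretely, using a tiny illustrative machine that flips every bit of its input and accepts at the first blank (the machine and its encoding are assumptions for illustration only):

```python
# Turing machine as a 7-tuple (Q, Σ, Γ, δ, q0, B, F) -- illustrative bit-flipping machine.
Q     = {"scan", "accept"}
Sigma = {"0", "1"}                 # input alphabet Σ
Gamma = {"0", "1", "B"}            # tape alphabet Γ
B     = "B"                        # blank symbol
q0, F = "scan", {"accept"}
delta = {                          # δ: (state, read) -> (next state, write, head move)
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "B"): ("accept", "B", +1),
}

def run_tm(word):
    tape, head, state = list(word) + [B], 0, q0
    while state not in F:
        state, tape[head], move = delta[(state, tape[head])]
        head += move
        if head == len(tape):
            tape.append(B)         # the tape is unbounded to the right
    return "".join(tape).rstrip(B)

print(run_tm("1011"))  # 0100
```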
55. Illustrate the techniques of Turing machine construction?
1. Define the input alphabet and tape alphabet.
2. Create states and transitions for required logic.
3. Add tape operations: read, write, move.
4. Use loops, shifts, and halt conditions.
5. Make sure machine reaches final state for valid input.
56. Construct the TM to compute the concatenation function?
1. A TM for concatenation copies two strings on tape.
2. Input: w#x (use # as separator).
3. Move the head to #, then shift x left so that it immediately follows w.
4. Erase # and adjust the tape symbols.
5. Output: string becomes wx (concatenated).
57. Design a PDA to accept palindromes over {a, b}.
1. Use a stack to store first half of string.
2. Push input symbols onto stack.
3. At the (nondeterministically guessed) midpoint, start popping and matching.
4. For even or odd length, use different states.
5. Accept if input matches reversed stack.
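A sketch of the stack idea: a real PDA guesses the midpoint nondeterministically, so this simulation simply tries every possible midpoint (skipping a middle symbol for odd lengths); the encoding is illustrative:

```python
# Stack-based sketch of the palindrome PDA over {a, b}.
def pda_accepts_palindrome(word):
    n = len(word)
    for mid in range(n + 1):                  # "guess" the midpoint by trying them all
        for skip in (0, 1):                   # skip = 1 handles odd-length palindromes
            stack = list(word[:mid])          # push phase: first half onto the stack
            ok = True
            for symbol in word[mid + skip:]:  # pop-and-match phase
                if not stack or stack.pop() != symbol:
                    ok = False
                    break
            if ok and not stack:              # accept if all matched and the stack is empty
                return True
    return False

print(pda_accepts_palindrome("abba"))  # True
print(pda_accepts_palindrome("aba"))   # True
print(pda_accepts_palindrome("abb"))   # False
```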
58. What is Primitive recursive function?
1. A primitive recursive function is a total function that always gives an output in finite
time.
2. It is built using basic functions like zero function, successor function, and projection
functions.
3. New functions are made using composition and primitive recursion rules.
4. Common functions like addition, multiplication, and factorial are primitive recursive.
5. They are simple and always terminate, but cannot define all computable functions.
59. List the properties of recursive and recursively enumerable languages?
Recursive Languages (Decidable)
1. Every input halts with a clear yes/no answer.
2. Closed under union, intersection, and complementation.
3. Can be decided by a Turing Machine that always halts.
Recursively Enumerable Languages (RE)
4. A Turing Machine accepts strings in the language but may loop forever for others.
5. Closed under union and intersection, but not complementation
60. Explain in detail about Turing Machine.
1. Turing Machine is a powerful model of computation.
2. It uses a tape, head, and states to process input.
3. It can read/write/move left or right based on rules.
4. It accepts, rejects, or loops depending on logic.
5. It can compute anything a real computer can.
61. Explain Turing machine as a computer of integer functions.
1. Turing machines can compute functions like add, subtract, etc.
2. Integer inputs are encoded on the tape.
3. The machine manipulates these using rules and states.
4. It can simulate any algorithm step-by-step.
5. It models how real computers calculate with numbers.
62. Define Chomsky Normal Form (CNF)? Convert a CFG to CNF.
1. In CNF, every production is either A → BC or A → a.
2. No ε-productions or unit productions (except S → ε).
3. Convert CFG by removing ε, unit, and useless rules.
4. Break longer RHS strings into two-symbol rules.
5. CNF is useful in parsing and proofs.
63. Construct a CFG for balanced parentheses.
1. Let S be the start symbol.
2. Rules: S → SS | (S) | ε.
3. This creates strings like (), (()), ()(), etc.
4. Grammar ensures every ( has a matching ).
5. Generates all correctly balanced parentheses.
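A quick sketch of how such strings can be checked; the depth counter plays the role of the stack a PDA would use (illustrative):

```python
# Balanced-parentheses check: the depth counter stands in for a PDA stack.
def balanced(word):
    depth = 0
    for ch in word:
        depth += 1 if ch == "(" else -1
        if depth < 0:            # a ')' appeared with no matching '('
            return False
    return depth == 0            # accept only if every '(' was closed

print(balanced("(())()"))  # True
print(balanced("())("))    # False
```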
64. Give the CFG for the language of palindromes over {a, b}.
The CFG that generates all palindromes over {a, b} is:
1. Grammar Rules: S → aSa | bSb | a | b | ε
2. Explanation:
a. aSa → adds a on both sides.
b. bSb → adds b on both sides.
c. a, b → single-letter palindromes.
d. ε → empty string (also a palindrome).
3. Example: Generate "abba"
Let's derive abba using the grammar:
S
→ aSa
→ abSba
→ abba
65. Define Greibach Normal Form and explain its significance.
1. In GNF, rules are A → aα, where a is terminal, α is non-terminals.
2. No ε-productions (except possibly S → ε).
3. Useful for building top-down parsers.
4. Guarantees a terminal at each derivation step.
5. Helps in real-time language parsing.
66. Define ϵ-closure in the context of NFA - ε
1. ε-closure of a state = set of states reachable by ε-moves.
2. Includes the state itself.
3. Helps convert ε-NFA to NFA or DFA.
4. Used in subset construction method.
5. Needed to compute next states correctly.
67. Design a DFA that accepts all binary strings divisible by 3.
1. Use states to represent remainders: q0, q1, q2.
2. q0 = remainder 0 (divisible), q1 = rem 1, q2 = rem 2.
3. Input 0 and 1 update remainder mod 3.
4. Accepting state is q0.
5. Transitions use binary division logic.
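A sketch of the remainder-tracking logic: reading bit b from remainder r moves the machine to (2r + b) mod 3, so q0/q1/q2 in the answer correspond to remainders 0/1/2 (the encoding is illustrative):

```python
# DFA sketch: binary strings divisible by 3; the state is the remainder mod 3.
def divisible_by_3(word):
    remainder = 0                                   # start in q0 (remainder 0)
    for bit in word:
        remainder = (2 * remainder + int(bit)) % 3  # transition on reading a bit
    return remainder == 0                           # accept iff we end in q0

print(divisible_by_3("110"))  # True  (binary 6)
print(divisible_by_3("101"))  # False (binary 5)
```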
68. Differentiate between language and automaton?
1. Language = set of valid strings.
2. Automaton = machine that accepts or rejects strings.
3. Automaton defines a language by accepting it.
4. Many automata can define the same language.
5. Language is abstract; automaton is its model.
69. Show with an example that regular languages are closed under complement.
1. DFA for L: accepts all strings with even 1s.
2. Swap accepting and non-accepting states → L'.
3. New DFA accepts strings with odd 1s.
4. This proves complement of a regular language is also regular.
5. So, regular languages are closed under complement.
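A minimal sketch of the construction: run the same DFA for L = "even number of 1s" and simply swap which state counts as accepting (state names are illustrative):

```python
# DFA for L = {binary strings with an even number of 1s} and its complement L'.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd",   ("odd", "1"): "even"}

def final_state(word):
    state = "even"                  # start state
    for symbol in word:
        state = delta[(state, symbol)]
    return state

in_L       = lambda w: final_state(w) == "even"  # original accepting state
in_L_prime = lambda w: final_state(w) == "odd"   # complement: accepting states swapped

print(in_L("1010"), in_L_prime("1010"))  # True False
print(in_L("100"),  in_L_prime("100"))   # False True
```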
70. Define the extended transition function δ^ (delta) and explain its significance.
1. δ^ is the extended transition function for full strings.
2. It shows how a machine processes a string step-by-step.
3. δ^(q, w) = state reached from q by reading w.
4. Important to define DFA/NFA acceptance.
5. Helps prove language properties.
71. Can two DFAs accept the same language but have different structures? Explain.
1. Yes, multiple DFAs can accept the same language.
2. One may be optimized, another redundant.
3. States and transitions may differ.
4. But accepted strings are the same.
5. Example: DFA with 3 states vs 4 states for same pattern.
72. Explain the significance of unreachable states in a DFA.
1. Unreachable states are never used for any input.
2. They make DFA unnecessarily large.
3. Removing them simplifies the DFA.
4. Makes analysis and conversion easier.
5. Important for DFA minimization.
73. Describe the pumping lemma condition that proves a language is not regular.
1. For long enough strings, parts can be “pumped”.
2. String w = xyz, with |xy| ≤ p, y ≠ ε.
3. For all i ≥ 0, xyⁱz must be in L.
4. If any such string is not in L, it's not regular.
5. Helps prove limits of regular languages.
74. Differentiate between recursive, recursively enumerable, and non-recursively
enumerable languages.
1. Recursive: Always halts and decides input.
2. RE: Halts on accepted inputs, may loop on others.
3. Non-RE: No machine can accept it.
4. Recursive ⊂ RE ⊂ all languages (both containments are strict).
5. Recursive = Decidable, RE = Semi-decidable.
75. Explain the concept of a Universal Turing Machine (UTM) with a suitable diagram.
1. A UTM can simulate any other Turing Machine.
2. It takes as input a machine description + string.
3. Behaves exactly like that machine on that input.
4. Proves idea of programmable computers.
5. Diagram: the UTM's tape holds the encoded description ⟨M⟩ of the simulated machine followed by its input w; the finite control reads both and simulates M step by step.
76. Describe the method of reducing a known undecidable problem to prove the
undecidability of another.
1. Start with a known undecidable problem A.
2. Reduce A to a new problem B.
3. If A ≤ B, and A is undecidable, then B is undecidable.
4. This proves B has no algorithm.
5. Common method in computability theory.
77. Explain the concept of Turing-recognizable languages. How do they relate to RE
languages?
1. Turing-recognizable = Recursively Enumerable (RE) languages.
2. There exists a Turing Machine that accepts strings in L.
3. May loop forever for strings not in L.
4. Recognizable but not necessarily decidable.
5. Useful in semi-decision processes.
78. Illustrate the working of a Linear Bounded Automaton (LBA) with a small example.
1. LBA is a Turing Machine with limited tape space.
2. Tape is restricted to input length.
3. Example: L = {aⁿbⁿcⁿ}, check a-b-c match.
4. It marks and matches each symbol with bounds.
5. Accepts context-sensitive languages.
79. Present the Chomsky hierarchy and explain the relationship between its levels.
1. Type 0: Recursively Enumerable (Turing Machine).
2. Type 1: Context-sensitive (LBA).
3. Type 2: Context-free (PDA).
4. Type 3: Regular (FA).
5. Each level is strictly contained in the one above it: Type 3 ⊂ Type 2 ⊂ Type 1 ⊂ Type 0 (a lower type number means greater power).
80. Compare context-free, context-sensitive, and regular languages based on closure
properties.
Language Type        Union  Intersection  Complement
Regular              Yes    Yes           Yes
Context-Free (CFL)   Yes    No            No
Context-Sensitive    Yes    Yes           Yes
1. CFLs are closed under union but not intersection.
2. Context-sensitive are closed under all operations.
3. Regular languages are closed under all too.
4. Closure properties show language power.
5. Important in parser and automaton design.
81. Prove that PCP is undecidable using reduction from the Halting Problem or another
known undecidable problem.
1. Post Correspondence Problem (PCP) is undecidable.
2. We reduce from the Halting Problem (which is undecidable).
3. Given a Turing Machine, encode it as a PCP instance.
4. If PCP had a solution, it would solve the Halting Problem.
5. Thus, PCP is undecidable by contradiction.
82. Discuss the closure properties of recursively enumerable languages. Are they closed
under complement? Justify.
1. RE languages are closed under union and intersection.
2. Not closed under complement.
3. Complement of an RE language might not be RE.
4. Example: the Halting Problem language is RE, but its complement is not RE.
5. So, RE languages lack closure under complement.
83. Define decidability. Provide examples of decidable and undecidable problems.
1. A problem is decidable if an algorithm always halts with yes/no.
2. Example (decidable): “Is string w accepted by DFA?”
3. An undecidable problem has no such algorithm.
4. Example (undecidable): Halting Problem.
5. Decidability tells us if problems are solvable by machines.
84. Explain the role of diagonalization in proving undecidability.
1. Diagonalization is a proof technique by Cantor.
2. Shows there are more problems than algorithms.
3. Used to prove undecidability of Halting Problem.
4. It creates a contradiction using self-reference.
5. Powerful method in theoretical computer science.
85. Describe a Turing machine that accepts the language L = {aⁿbⁿcⁿ | n ≥ 1}. Is it context-
free? Why or why not?
1. TM stores and matches counts of a’s, b’s, and c’s.
2. Replaces a → X, b → Y, c → Z step-by-step.
3. Repeats until all matched; accepts if all are equal.
4. Not context-free because PDA can’t compare 3 counts.
5. Needs full memory → accepted by TM, not by PDA.
86. Discuss why every recursive language is RE but not all RE languages are recursive.
Provide an example.
1. Recursive languages always halt; RE may not.
2. All recursive languages are RE.
3. But some RE languages don’t halt on non-members.
4. Example: Halting Problem is RE, not recursive.
5. Recursive ⊂ RE — strict subset.
87. Explain the concept of mapping reducibility. How is it used in undecidability proofs?
1. Mapping reducibility: A ≤ₘ B if there is a computable function f such that x ∈ A ⇔ f(x) ∈ B.
2. If B is decidable, A must be decidable.
3. Used to show new problems are undecidable.
4. Reduce a known hard problem to a new one.
5. Helps prove undecidability through transformations.
88. Explain the difference between a Turing-Decidable language and a Turing-Recognizable
language.
1. Turing-Decidable: machine halts for all inputs.
2. Turing-Recognizable: machine halts only on accepted inputs.
3. All decidable languages are recognizable.
4. Some recognizable languages may loop forever.
5. Decidable ⊂ Recognizable.
89. Discuss the concept of undecidability. How does it impact our understanding of
algorithmic limits?
1. Undecidability means no algorithm can solve the problem.
2. There’s no machine that halts for all inputs.
3. Examples: Halting Problem, PCP.
4. Shows limits of what computers can compute.
5. Important in computer science theory.
90. Compare and contrast Context-Free Languages (CFLs) and Context-Sensitive Languages
(CSLs).
1. CFLs use stack memory (PDA); CSLs use linear space (LBA).
2. CFL ⊂ CSL; all CFLs are CSLs.
3. CFLs not closed under intersection; CSLs are.
4. CFLs have simpler grammar (CFG); CSLs need more power.
5. CSLs can handle more complex languages than CFLs.
91. Describe the importance of Linear Bounded Automata (LBA) in recognizing Context-
Sensitive Languages.
1. LBA is a TM with tape bounded to input size.
2. It accepts all Context-Sensitive Languages (CSLs).
3. More powerful than PDA, less than TM.
4. Useful for problems like aⁿbⁿcⁿ.
5. Key model in complexity theory.