
SciComp Notes

Decidability refers to whether a problem can be solved algorithmically, with undecidable problems lacking a general solution algorithm. Examples include the Halting Problem and the Post Correspondence Problem, both of which cannot be resolved by any algorithm. The document also discusses closure properties of decidable languages under operations like difference and mapping, and provides an overview of time and space complexity classifications.

Decidability

Definition: Decidability deals with the question of whether a specific problem can be
solved algorithmically by a computer. In simpler terms, it is about whether there
exists a systematic procedure (an algorithm) that, given any instance of the problem,
will eventually halt with the correct answer. Decidability is a broad concept that
applies to various classes of languages, including decidable languages, regular
languages, and context-free languages.

What is undecidability: A problem is considered undecidable if there is no algorithm
that can always correctly determine the solution for every possible instance of that
problem. In other words, undecidability means that no systematic, general procedure
(algorithm) can answer "yes" or "no" for all instances of the problem.

Examples of undecidability:
1. The Halting Problem is one of the most famous undecidable problems. It
states that there is no general algorithm that can determine, given a
description of an arbitrary computer program and an input, whether the
program will eventually halt (i.e., stop running) or continue running forever.

2. The Post Correspondence Problem is another example of an undecidable
problem. Given a set of tiles, each carrying one string on its top and one on its
bottom, the question is whether there exists a sequence of these tiles
(repetitions allowed) such that the concatenation of the top strings equals the
concatenation of the bottom strings.
Formally, given a set of pairs of strings (w_1, x_1), (w_2, x_2), ..., (w_k, x_k), is
there a sequence i_1, i_2, ..., i_n such that w_{i_1}w_{i_2}...w_{i_n} =
x_{i_1}x_{i_2}...x_{i_n}? The Post Correspondence Problem was shown to be
undecidable by Emil Post in 1946.

Conclusion:
Decidable (or computable): If there exists an algorithm that, for every possible input,
correctly decides whether the input is in the language (accepts) or not in the
language (rejects), then the problem is said to be decidable. In other words, a
language is decidable if there is an algorithm that always halts and provides the
correct answer.
Undecidable: If there is no algorithm that can always correctly decide whether an
arbitrary input is in the language or not, the problem is considered undecidable. The
classic example is the halting problem, where there is no algorithm that can decide,
for every program and input, whether the program will eventually halt or run forever.
Sample Question
For which of the following operations is the class of decidable languages
closed:
1) the difference of two decidable languages;
2) the mapping of one decidable language to another decidable language.
Prove or give a counterexample.

Class of Languages
Definition: This refers to a collection of languages that share certain properties. For
example, it could be the class of decidable languages, regular languages,
context-free languages, etc.
Closure Property
When the question asks whether a class of languages is closed under certain
operations, it's essentially asking whether applying those operations to languages
within that class preserves a certain property, such as decidability.

a) Closed Under Difference: If a class of languages is closed under the
difference operation, it means that if you take two languages from that class
and form their set difference, the resulting language is still in the same class.
b) Closed Under Mapping: If a class of languages is closed under mapping, it
means that if you have a function that maps strings from one language to
another, and the original language is in that class, then the resulting language
after mapping is also in that class.

1) The first question asks whether, when you take the set difference of two
decidable languages, the resulting language is still decidable, i.e., stays
within the same class.
Given two languages L1 and L2, the difference L1 - L2 is the set of strings
that are in L1 but not in L2. If a class of languages is "closed under
difference," then for any two languages L1 and L2 from that class, L1 - L2 is
also in the class.
The class of decidable languages is indeed closed under difference: given
deciders M1 for L1 and M2 for L2, build a machine that, on input x, runs M1
and then M2 (both always halt) and accepts exactly when M1 accepts and M2
rejects. This machine always halts and decides L1 - L2.
Example:
Let L1 be the set of even numbers and L2 the set of multiples of 3; both are
decidable. The difference L1 - L2 is the set of even numbers that are not
multiples of 3, and it is decided by checking evenness and then
non-divisibility by 3, so L1 - L2 is decidable.
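The closure argument above can be sketched in code. This is a minimal illustration, not a Turing-machine construction: the two deciders are represented as total Python predicates (`is_even` and `is_mult3` are stand-ins for the languages in the example).

```python
def decide_difference(in_l1, in_l2, x):
    """Decider for L1 - L2, built from deciders for L1 and L2.

    in_l1 and in_l2 stand in for the two deciders: they are assumed
    to be total (halt on every input) and return True/False.
    """
    # Accept exactly when the first decider accepts and the second rejects.
    return in_l1(x) and not in_l2(x)

# The example from the notes: even numbers minus multiples of 3.
is_even = lambda n: n % 2 == 0
is_mult3 = lambda n: n % 3 == 0
```

For instance, 4 is even and not a multiple of 3, so it is in L1 - L2; 6 is even but a multiple of 3, so it is not.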
2) The second question asks whether, given a function that maps strings of
one decidable language into another, the resulting language is still decidable
and stays within the same class.
Let L1 and L2 be decidable languages and let f be the mapping. Consider the
language {x in L1 : f(x) in L2}. A natural procedure: on input x, reject if x is
not in L1; otherwise compute f(x) and accept exactly when f(x) is in L2.
This procedure is a decider only if f is computable, so that the step
"compute f(x)" always halts. If f is an arbitrary (non-computable) mapping,
no such decider exists in general, and the resulting language need not be
decidable, so closure fails without the computability assumption (there is no
computed value with which to test membership).
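The procedure described above can be sketched as follows, again with deciders as total Python predicates. The languages and the mapping `triple` are hypothetical choices for illustration; the key assumption is that `f` is a total computable function.

```python
def decide_mapped(in_l1, in_l2, f, x):
    """Decider for {x in L1 : f(x) in L2}, following the procedure in
    the notes. Correctness requires f to be computable and total;
    if computing f(x) may never halt, this is no longer a decider."""
    if not in_l1(x):
        return False          # x not in L1: reject
    return in_l2(f(x))        # accept exactly when f(x) is in L2

# Hypothetical example: L1 = even numbers, L2 = multiples of 3,
# and f triples its input, so every even number maps into L2.
is_even = lambda n: n % 2 == 0
is_mult3 = lambda n: n % 3 == 0
triple = lambda n: 3 * n
```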

1) We can use Cantor’s diagonal argument on the set of all possible infinite
sequences of 0s and 1s.
Assume for contradiction that A is countable, so its elements can be listed:

0110101...
1011101...
1100101...
...

Now, we want to show that there are more sequences than we can list. We'll
create a new sequence that's not in our list: for each position n, the new
sequence takes the opposite of the nth digit of the nth sequence in the list.

For example, flipping the first digit of the first sequence, the second digit of
the second, and the third digit of the third gives a new sequence beginning
1 1 1..., which differs from the first sequence in the first position, from the
second sequence in the second position, and so on.

So the new sequence we've created is not on our original list, which means
the list cannot include all possible sequences. This is a contradiction, and
since the assumption of countability leads to a contradiction, the set A of all
infinite sequences over {0,1} is uncountable.
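The diagonal construction can be demonstrated on a finite prefix of the list. This is only an illustration of the flipping rule, not a proof: each listed sequence is represented by a function from position to digit, using the three example sequences above.

```python
def diagonal_flip(listed):
    """Given the first n listed sequences (each represented as a
    function index -> digit), build the first n digits of the diagonal
    sequence: digit i is the opposite of sequence i's digit i."""
    return [1 - listed[i](i) for i in range(len(listed))]

# The three sequences from the notes, as digit functions
# (only a finite prefix matters for the demonstration):
s1 = lambda i: [0, 1, 1, 0, 1, 0, 1][i]
s2 = lambda i: [1, 0, 1, 1, 1, 0, 1][i]
s3 = lambda i: [1, 1, 0, 0, 1, 0, 1][i]
```

By construction, the flipped sequence disagrees with the i-th listed sequence at position i, so it cannot equal any sequence in the list.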

2) Cartesian Product: Form the cartesian product A×B, where A is the set of all
infinite sequences over {0,1}.

Uncountability Claim: The statement asserts that A×B is uncountable.

Proof: To prove this, you could use a similar proof technique as Cantor's
diagonal argument adapted for pairs of sequences. The key idea is to assume,
for contradiction, that you can list all pairs from A×B (assume that A x B is
countable) and then construct a pair that is not in the list, showing that the
assumption of countability leads to a contradiction.

This proof doesn't assume the countability of A or B individually; it focuses
directly on the uncountability of their cartesian product.
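The adaptation to pairs can be sketched the same way: diagonalize on the first components only. If the i-th listed pair is (a_i, b_i), build a first component d with d[i] different from a_i's digit i; then (d, b) differs from every listed pair regardless of b. The listed pairs below are hypothetical stand-ins.

```python
def diagonal_pair(pairs, b):
    """Sketch of Cantor's argument adapted to pairs: given the first
    n pairs (a_i, b_i) of a supposed enumeration of A x B (first
    components as digit functions), build a first component d with
    d[i] != a_i(i). The pair (d, b) then differs from every listed
    pair in its first component."""
    d = [1 - pairs[i][0](i) for i in range(len(pairs))]
    return (d, b)

# Hypothetical listed pairs; only the first components matter here.
a1 = lambda i: 0
a2 = lambda i: 1
pairs = [(a1, "b1"), (a2, "b2")]
```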

Space complexity
● Definition: Space complexity is a measure of the amount of memory (space)
an algorithm needs to complete as a function of the size of the input.
● Notation: Like time complexity, it is often expressed using Big O notation (e.g.,
O(f(n))), where "f(n)" represents an upper bound on the growth rate of the
algorithm's memory usage.
● Example: If an algorithm uses a constant amount of memory regardless of the
input size, its space complexity is O(1). If the memory usage grows linearly
with the input size, it might be O(n).
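The O(1) vs. O(n) contrast can be shown with two small functions (illustrative names, not from the notes): one keeps a single accumulator, the other builds an output that grows with the input.

```python
def running_sum_constant_space(values):
    """O(1) extra space: one accumulator, no matter how long the
    input is."""
    total = 0
    for v in values:
        total += v
    return total

def prefix_sums_linear_space(values):
    """O(n) extra space: the output list grows with the input."""
    out, total = [], 0
    for v in values:
        total += v
        out.append(total)
    return out
```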

O(1) - Constant Space:


● Definition: An algorithm has constant space complexity if its memory usage
remains the same regardless of the size of the input.
● Example: Storing a fixed number of variables or a constant amount of data
regardless of input size.

O(log n) - Logarithmic Space:


● Definition: The space requirements grow logarithmically as the size of the
input increases.
● Example: Recursive algorithms that halve the problem at each step (such as
recursive binary search), where the call stack reaches a depth logarithmic in
the input size.

O(n) - Linear Space:


● Definition: The space usage is directly proportional to the size of the input.
● Example: Algorithms that use an array or data structure where each element
corresponds to an input element.

O(n log n) - Linearithmic Space:


● Definition: The space requirements grow linearithmically with the size of the
input.
● Example: Naive implementations of merge sort that allocate a fresh buffer at
every merge step; note that a standard merge sort needs only O(n) auxiliary
space.

O(n^2) - Quadratic Space:


● Definition: The space usage is proportional to the square of the input size.
● Example: Algorithms with nested loops that use a two-dimensional data
structure.

O(2^n) - Exponential Space:


● Definition: The space requirements double with each addition to the input
size.
● Example: Brute-force algorithms that generate all possible combinations or
permutations.

O(n!) - Factorial Space:


● Definition: The space usage grows as the factorial of the input size.
● Example: Algorithms that generate all possible permutations or combinations.

Time Complexities
● Definition: Time complexity is a measure of the amount of time an algorithm
takes to complete as a function of the size of the input.
● Notation: It is typically expressed using Big O notation (e.g., O(f(n))), where
"f(n)" represents an upper bound on the growth rate of the algorithm's running
time.
● Example: If an algorithm takes constant time regardless of the input size, its
time complexity is O(1). If the running time grows linearly with the input size, it
might be O(n).
O(1) - Constant Time:
● Definition: An algorithm has constant time complexity if its runtime remains
the same regardless of the size of the input.
● Example: Accessing an element in an array by index, performing a simple
arithmetic operation.

O(log n) - Logarithmic Time:


● Definition: The running time grows logarithmically as the size of the input
increases.
● Example: Binary search on a sorted array, where each step reduces the search
space by half.
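Binary search, the standard O(log n) example, can be sketched as follows. This iterative version also uses O(1) extra space; a recursive version would use O(log n) stack space, tying back to the space-complexity section above.

```python
def binary_search(sorted_arr, target):
    """O(log n) time: each comparison halves the remaining search
    space. Returns the index of target, or -1 if absent."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1
```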

O(n) - Linear Time:


● Definition: The running time is directly proportional to the size of the input.
● Example: Linear search in an unsorted array, traversing elements in a linked
list.

O(n log n) - Linearithmic Time:


● Definition: Typical of divide-and-conquer algorithms that split the input into
halves and do a linear amount of work at each of the log n levels of
recursion.
● Example: Merge Sort, Heap Sort.

O(n^2) - Quadratic Time:


● Definition: The running time is proportional to the square of the size of the
input.
● Example: Bubble Sort, Selection Sort - both involve nested loops.
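Bubble sort's nested loops make the quadratic behavior visible: for an input of size n, the inner comparison runs on the order of n^2 times. A minimal version:

```python
def bubble_sort(arr):
    """O(n^2) time: two nested loops over the input. Sorts a copy so
    the caller's list is left untouched."""
    a = list(arr)
    n = len(a)
    for i in range(n):
        # After pass i, the last i elements are already in place.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```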

O(2^n) - Exponential Time:


● Definition: The running time doubles with each addition to the input size.
● Example: Brute-force algorithms that explore all possible combinations.

O(n!) - Factorial Time:


● Definition: The running time grows as the factorial of the input size.
● Example: Algorithms that generate all permutations or combinations.
Question:
Analyse the time and space complexities of the example_algorithm function in terms
of the input size n.
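The function itself is not reproduced in the notes; based on the analysis that follows (a single loop over an array, accumulating into a variable sum_values), a plausible reconstruction might look like this. The body is an assumption, not the original code.

```python
def example_algorithm(arr):
    """Hypothetical reconstruction of the function analysed below:
    one loop over arr, constant work per element, accumulating into
    sum_values. Time O(n), extra space O(1)."""
    n = len(arr)
    sum_values = 0
    for i in range(n):
        sum_values += arr[i]
    return sum_values
```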

Answer:

Time Complexity Analysis:

The first loop (for i in range(n)) iterates through the array arr with a constant amount
of work for each element. This results in a linear time complexity.
Therefore, the time complexity is O(n).

Space Complexity Analysis:


The variable n takes constant space.
The variable sum_values is a single accumulator; assuming fixed-size
numbers, it occupies constant space regardless of the input size.
Since there are no additional data structures used that scale with the input size, the
space complexity is O(1) (constant).
In summary, the time complexity is O(n) and the space complexity is O(1) for the
provided example_algorithm function.
