DATA AND FILE STRUCTURES
DATA STRUCTURE
Data are represented by data values held temporarily within a program's data area or recorded permanently on a file. Often the different data values are related to each other.
To enable programs to make use of these relationships, these data values must be in an organised form. The organised collection of data is called a data structure.
Programmers have to follow certain rules to access and process the structured data. We may, therefore, say data are represented as:
Data Structure = Organised Data + Allowed Operations
There are standard data structures which are often used in their own right and can form the basis for complex data structures.
One such basic data structure is the array.
Arrays are building blocks for more complex data structures. Designing and using data structures is an important programming skill.
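To make the equation above concrete, here is a minimal C sketch of an array packaged together with its allowed operations; the names List, list_insert and list_get are illustrative assumptions of ours, not from the text:

/* A minimal sketch of "organised data + allowed operations" in C.
   The names List, list_insert and list_get are illustrative only. */
#define CAPACITY 10

typedef struct {
    int items[CAPACITY];   /* the organised data     */
    int count;             /* number of items stored */
} List;

/* allowed operation: append a value (returns 0 on success, -1 if full) */
int list_insert(List *l, int value)
{
    if (l->count >= CAPACITY)
        return -1;
    l->items[l->count++] = value;
    return 0;
}

/* allowed operation: read the value at a given position */
int list_get(const List *l, int pos)
{
    return l->items[pos];
}

Here the array items is the organised data, while list_insert and list_get are the only allowed operations on it.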
Asymptotic Notations:
Time and space complexity are measured in terms of asymptotic notations.
For example, let us consider a program which stores n elements.
#include <stdio.h>

void store()
{
    int i, n;

    printf("enter the number of elements ");
    scanf("%d", &n);

    int a[n];                 /* C99 variable-length array: space grows with n */
    for (i = 0; i < n; i++)
    {
        scanf("%d", &a[i]);   /* store the elements */
    }
}
In the above program, space is required to store the executable code and to store the n elements.
The memory required to store the executable code is fixed (static).
The memory required to store the n elements depends on the value of n.
The time required to execute the code also depends on the value of n.
In the above program we have two executable statements outside the loop, the printf and the scanf.
Let us assume there are x statements in the body of the for loop.
Then the number of statements executed by the program will be x*n + 2. For instance, if the loop body contains x = 3 statements and n = 10, then 3*10 + 2 = 32 statements are executed.
Here n is called the instance characteristic.
Asymptotic notation can describe the time and space complexity accurately for large instance characteristics.
Asymptotic notation will provide us with upper and/or lower time and space bounds for a given algorithm.
But what do we mean by upper and lower bounds?
-- An upper bound gives the maximum time or maximum space required by a program.
-- A lower bound gives the minimum time or minimum space required.
Big Oh Notation:
Definition: f(n) = O(g(n)) (read as "f of n equals big oh of g of n") iff (if and only if) there exist two positive constants c and n0 such that
|f(n)| <= c |g(n)| for all n >= n0.
Big Oh notation provides an upper bound for the function f. Consider f(n) to be the computing time of some algorithm, where n is the number of inputs, the number of outputs, or their sum.
When we say that an algorithm has computing time O(g(n)), we mean that if the algorithm is run on any computer on the same type of data but for increasing values of n, the resulting time will always be less than c times g(n) for some positive constant c.
Let us consider a couple of examples to understand the above concept.
(1) Linear Functions: Consider f(n) = 5n + 2.
When n is at least 2, 5n + 2 <= 5n + n = 6n.
So f(n) = O(n), with c = 6 and n0 = 2.
Thus f(n) is bounded from above by a linear function.
We can arrive at the same conclusion in other ways.
For example, 5n + 2 <= 10n for n >= 1.
Therefore we can also satisfy the definition of big oh by selecting c = 10 and n0 = 1.
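To connect the mathematics to code, here is a small illustrative C function (our own sketch, not from the text) whose loop body executes at most n times, so its computing time is O(n):

/* Linear search: the loop body runs at most n times, so the
   computing time is O(n). Illustrative example, not from the text. */
int linear_search(const int a[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
    {
        if (a[i] == key)
            return i;      /* found at position i */
    }
    return -1;             /* not found */
}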
(2) Quadratic Functions: Suppose that f(n) = 5n^2 + 4n + 2.
We see that for n >= 2, f(n) <= 5n^2 + 5n.
Now we note that for n >= 5, 5n <= n^2.
Hence for n >= n0 = 5, f(n) <= 5n^2 + n^2 = 6n^2.
Therefore f(n) = O(n^2), with c = 6.
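Similarly, here is a sketch of a function with O(n^2) computing time (again illustrative, not from the text): the inner loop runs up to n - 1 times for each of the n outer iterations, so the total work is of the form a*n^2 + b*n + c, which is O(n^2).

/* Counting pairs (i, j) with i < j and a[i] < a[j]: the inner loop
   runs up to n - 1 times for each of the n outer iterations, so
   roughly n^2/2 comparisons are made, which is O(n^2).
   Illustrative example, not from the text. */
long count_pairs(const int a[], int n)
{
    int i, j;
    long pairs = 0;
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (a[i] < a[j])
                pairs++;
    return pairs;
}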
Omega Notation:
Definition: f(n) = omega(g(n)) (read as "f of n is equal to omega of g of n") iff positive constants c and n0 exist such that f(n) >= c g(n) for all n >= n0.
Omega notation provides a lower bound for the function f.
f(n) = omega(g(n)) means f is at least c times the function g except possibly when n is smaller than n0.
For example, suppose we have a linear function f(n) = 3n + 2. Since 3n + 2 > 3n for all n >= 0, we can select c = 3 and n0 = 0, and hence f(n) = omega(n).
Now consider a quadratic function f(n) = 5n^2 + 2n + 2. Then 5n^2 + 2n + 2 > 5n^2 for n >= 0, hence we can say f(n) = omega(n^2).
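As an illustrative sketch (the function below is an assumption of ours, not taken from the text), any algorithm that must examine all n inputs has running time omega(n); summing the elements of an array is the simplest such case:

/* Summing an array: every element must be read once, so the loop
   body executes exactly n times. The running time is omega(n),
   and since it is also O(n), it is in fact theta(n).
   Illustrative example, not from the text. */
int sum(const int a[], int n)
{
    int i, total = 0;
    for (i = 0; i < n; i++)
        total += a[i];
    return total;
}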
Theta Notation:
Definition: f(n) = theta(g(n)) (read as "f of n is equal to theta of g of n") iff positive constants c1, c2 and n0 exist such that c1 g(n) <= f(n) <= c2 g(n) for all n >= n0.
f(n) = theta(g(n)) means f lies between c1 times the function g and c2 times the function g except possibly when n is smaller than n0.
Here c1 and c2 are positive constants.
Thus g is both a lower and an upper bound, to within constant factors, on the value of f for all suitably large n. For example, since 3n <= 3n + 2 <= 4n for all n >= 2, the function f(n) = 3n + 2 is theta(n) with c1 = 3, c2 = 4 and n0 = 2.