Figure 1. Feedback control scheduling architecture for RTDB.

The observation principle consists of observing the results obtained by the system and checking whether the QoS currently observed is consistent with the QoS initially required; e.g., in a VoD application, the system checks whether the video sequences are presented to users without interruptions.

Figure 2. Adapted feedback loop for multimedia applications.
Figure 3. Feedback control architecture for distributed multimedia systems.

Using this specification, a video stream can express its (m,k)-frame constraint. The stream packets are labeled as optional, hard optional or mandatory according to their k-frames. To guarantee a minimum QoS for the stream, it is sufficient that all mandatory frames meet their deadlines; i.e., if some optional frames miss their deadlines, this only degrades the (k,k)-frame QoS (hard guarantee) but does not affect the required (m,k)-frame QoS.

Figure 4. k-frames adaptation.
Figure 5. Final GoP after application of the (9,12)-frame constraint.
Figure 6. A simulator for distributed multimedia systems.

In the first two cases, the replication strategy is not established. In the last case, the case manager, which has to control replication, sends an order to the saturated VS to start replication. The case manager then elects a VS among those that answered and are not saturated; the choice of the VS is made so as to obtain the best possible QoS. The demand returns to the monitor, which then ends the replication process. Afterward, the monitor restarts.

To assess the performance of the (m,k)-frame method and the replication strategy in comparison with previously proposed QoS approaches, we carried out simulations using the simulator presented in Figure 6.

A. Presentation of simulations

Maintaining significant simulation periods and a huge number of measurements at every moment is a difficult task for the system and the machine on which the simulation runs, and can cause problems such as memory overflow. To deal with such problems, we fix the number of steps, and the system then calculates the time interval over which we average.

We studied the behavior of the ratio of received client frames and the quality of service of the system. Given the main system parameters (described in Table I), we repeat the experiment 100 times in each simulation in order to obtain a sample of 100 values for each performance measure, i.e. to obtain significant results for QoS and rates. Each point shown in Figures 7 to 11 (rate of received frames, rate of useful frames, rate of lost frames, rate of waiting frames and rate of served frames) represents the computed average of the performance results deduced from each simulation sample.

TABLE I. SIMULATION PARAMETERS.

In order to analyze the influence of the k-frames on the rate of received frames, we compare the results obtained when using the (m,k)-frame method while varying the system workload. Figures 7 and 8 illustrate this comparison graphically. The best performance for transmitted frames is obtained when combining the (m,k)-frame method with the k-frames technique (cf. Figure 7): the rate of received frames is the highest, i.e. 57%. We can also see in Figure 8 that for all variations of λ > 0, i.e. for all system workload conditions, we obtain the best performance on the rate of useful frames and the rate of waiting frames.
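For concreteness, the mandatory/optional labeling behind the (m,k)-frame method can be sketched in a few lines of Python. This is a minimal illustration assuming the classic evenly spaced (m,k)-firm marking pattern; the paper additionally biases the choice by frame type (I before P, P before B), which is omitted here.

import math

# Minimal sketch, not the paper's implementation: mark the frames of a
# k-frame window as mandatory or optional under an (m,k)-frame constraint,
# using the standard evenly distributed (m,k)-firm pattern.
def mk_pattern(m, k):
    """Return a list of 'mandatory'/'optional' labels for frames 0..k-1."""
    labels = []
    for j in range(k):
        # Frame j is mandatory if it falls on the evenly spaced grid of m-out-of-k.
        if j == math.floor(math.ceil(j * m / k) * k / m):
            labels.append('mandatory')
        else:
            labels.append('optional')
    return labels

# Example: a (9,12)-frame constraint marks 9 of every 12 frames as mandatory.
pattern = mk_pattern(9, 12)
assert pattern.count('mandatory') == 9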
We can conclude that increasing the load of transmitted frames has no great effect on the received frames, the useful frames or the waiting frames. This result can be explained by the higher priority assigned to mandatory frames (I), which ensures their processing before the other frame classes (P and B). When we look at the performance of the k-frames technique (see Figure 7), we notice a progressive decrease in the frame loss rate as the workload progressively becomes heavy. There is a difference between using only the (m,k)-frame classification and combining this method with the k-frames technique. In the latter case, P frames can be scheduled before B frames, which decreases the rate of lost frames and degrades the served frames in the system (cf. Figures 9 and 11). This considerably affects the rate of received frames, especially when the system workload is heavy. In the following, we comment on the k-frames performance over three intervals of λ, i.e. at different system workloads: λ ∈ [0.1, 0.7] (light workload), λ ∈ [0.8, 1.4] (average workload) and λ ∈ [1.5, 2.0] (high workload). By combining the (m,k)-frame method and the k-frames technique, we obtain better results than with the (m,k)-frame method alone, across the variations of the simulation parameters.

Figure 1. Elements of a typical IRC botnet.

The botnet C&C server runs an IRC service that does not differ at all from standard IRC services [10]. In most cases the botmaster creates a certain IRC channel on the server, and all bot computers connect to it and wait for orders (Figure 1). Inspection of IRC network traffic can detect the presence of botnets in a local network, since the use of IRC clients/servers is usually not allowed in corporate networks.

Figure 2. Example of a Web service architecture with IPS.

THE TIME NEEDED FOR GETTING THE RESULTS
TABLE VII. DIFFICULTY OF IMPLEMENTATION
TABLE IX. LEVEL OF AUTOMATIZATION

The new agent communication layer, called the Agent Mentality Layer, lies between the Content Language Layer and the Message Transport Layer. It makes the transported messages capable of carrying the mental attitudes of agents, so that these mental attitudes can be shared among agents. Ontologies are vocabularies that must be obeyed when agents communicate with each other. For each agent communication layer in Figure 1, a corresponding operational ontology is also defined.

Figure 1: A New Agent Communication Layer Stack

The relationship of the documents is shown in Figure 2. The descriptions in Figure 2 refer to one another via URI references. Every interaction protocol description refers to multiple communicative act descriptions. Similarly, every communicative act refers to an action description as well as a proposition description in the content language description.

Figure 3: The Interaction Protocol Ontology

Figure 4 illustrates the communicative act ontology. The classification of communicative acts used to define this ontology is based on the work of Searle [12]. In our classification, the expressives represent content that contains a reason and an action; the commissives represent content that contains a condition and an action; the directives represent content that contains only an action; and the assertives represent content that contains only a proposition.
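As a rough illustration of this classification, the structure of the content alone determines the category. The class and field names below are ours, not identifiers from the ontology.

from dataclasses import dataclass
from typing import Optional

# Minimal sketch of the Searle-style categories described above.
@dataclass
class Content:
    action: Optional[str] = None
    proposition: Optional[str] = None
    reason: Optional[str] = None
    condition: Optional[str] = None

def classify_act(content: Content) -> str:
    """Map a content structure to a communicative-act category."""
    if content.reason and content.action:
        return "expressive"    # reason + action
    if content.condition and content.action:
        return "commissive"    # condition + action
    if content.action and not content.proposition:
        return "directive"     # action only
    if content.proposition and not content.action:
        return "assertive"     # proposition only
    return "unclassified"

print(classify_act(Content(action="deliver(video)")))            # directive
print(classify_act(Content(proposition="price(oil, rising)")))   # assertive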
As shown in Figure 6, our architecture contains the Mental Model, Action Engine, Proposition Engine, Communicative Act (CA) Engine, and Protocol Engine. Additionally, the architecture contains the 3APL agent platform as the planner, and the Data Model as the ontology storage. Next we elaborate on each component of the architecture.

Figure 4: The Communication Act Ontology

Figure 5 shows the proposition ontology, which formalizes the agent mental model shared among agents. It is useful for constructing cooperation among agents. The proposition ontology has three main parts, as described below:

Figure 5: The Proposition Ontology
Figure 7: Agent Communication Framework

The execution process of the engines and the declarative descriptions is shown in the six steps below:

By using the agent communication descriptions just mentioned, the 3APL deliberation cycle is enhanced, as shown in Figure 8.

Figure 8: Agent Communication Enhanced 3APL Deliberation Cycle
Figure 9: Agent Communication Framework for Semantic Web Services

In Figure 9, we extended the action ontology to refer to the declarative description, which is represented by the OWL-S Service description. We can then reuse OWL-S's process engine [26] and the OWL-S/UDDI Matchmaker [27] to execute the OWL-S process and the OWL-S profile, respectively. To this extended architecture, we added: 1) Web services, 2) the OWL-S/UDDI Matchmaker, and 3) the OWL-S Process Engine. We analyze the three components from the viewpoint of OWL-S: 1) they can be bound to a web service through the OWL-S grounding, and they can be used by agents; 2) the OWL-S/UDDI Matchmaker provides the platform for the OWL-S Profile to register and match; 3) the OWL-S Process Engine allows agents to invoke web services according to the OWL-S Process.

Then, agent S decides on the CA to use when interacting with agent T according to its attitude. When agent T receives the request, it judges the FP of the CA and responds with "agree" or "refuse". If it agrees, it executes the abstract video service. The process is shown in Figure 10. When an agent queries the OWL-S/UDDI Matchmaker and receives a response that involves multiple agents, it uses coordination to pick a proper agent. Assume agent S queries:

Figure 12: Movie Recommendation System

This paper defines the operational ontologies for agent communication, along with the engines that interpret the ontologies. Further, through communication, agents can coordinate with each other to execute semantic web services for users. This facilitates the declarative development of agent programs and the sharing of mental attitudes. Its advantages are as follows:

TABLE I. Feature selection
Figure 1. Fast attack framework
Figure 2. Threshold Selection process

Next, the appropriate threshold is selected based on a comparison of the observation and experiment results. Subsequently, the threshold value is verified in the verification process using the SPC approach.

A. Observation Technique

Figure 3. Network Design for Experiment

An experimental environment is designed for the purpose of identifying the ideal static threshold value in a controlled environment; the network setup is depicted in Figure 3.
One small local area network has been set up, consisting of one Linux CentOS machine, two machines running Windows XP Professional Service Pack 2, one freshly installed Windows XP Professional Service Pack 2 machine and one Windows Vista machine.

Figure 4. Statistical Process Control General Approach

Figure 4 shows the proposed general approach for using the statistical process control technique to validate the threshold.

TABLE IV. Experimental Result
TABLE III. Summary of the Mean Normal Connection per Second for Damadd Data

Figure 5. SPC chart for port (a) 21, (b) 25, (c) 110, (d) 135, (e) 139, (f) 53, (g) 445.

In a typical key tree approach [3], [19], [20], as shown in Fig. 1a, there are three different types of keys: the Traffic Encryption Key (TEK), Key Encryption Keys (KEKs), and individual keys. The TEK is also known as the group key and is used to encrypt multicast data. To provide scalable rekeying, the key tree approach makes use of KEKs so that the rekeying cost increases logarithmically with the group size for a join or depart request. An individual key serves the same function as a KEK, except that it is shared only by the GC and an individual member. In the example in Fig. 1a, K0 is the TEK, K1 to K3 are the KEKs, and K4 to K12 are the individual keys. The keys that a group member needs to store are determined by its location in the key tree; in other words, each member needs to store 1 + log_d N keys when the key tree is balanced, where d is the tree degree. For example, in Fig. 1a member U1 knows K0, K1, and K4, and member U7 knows K0, K3, and K10. The GC needs to store all of the keys in the key tree.

Figure 3. Key revocation in the one-way function chain scheme

Canetti et al. [4] proposed a variation of LKH that employs a functional relationship among the node keys in a binary key tree along the path from the leaf node representing the leaving member to the root. This scheme is called the One-way Function Chain (OFC) scheme [9], [18]. OFC reduces the communication overhead from LKH's 2 log2 N - 1 to log2 N by introducing a public pseudo-random function G which doubles the size of its input. The left and right halves of G(x) are denoted by L(x) and R(x), so G(x) = L(x)||R(x), where |L(x)| = |R(x)| = |x|. For example, when member M_i in Fig. 2 leaves, the KS only sends three rekey messages, from which each residual member can compute all and only the keys it is entitled to receive, as depicted in Fig. 3:

Figure 2. A binary logical key tree with eight leaf nodes.
Figure 4. Key revocation in the iterated hash chain scheme.

In Lam-Gouda batch rekeying, members remain in the same position of the tree throughout the group lifetime. The only changes permitted are moves up or down a tree level if sibling members leave the group. In any case, the set of keys that each member holds from its position to the root is always the same. In such a scenario, improved LKH can be applied directly because the only information that members need in order to update the keys is r ⊕ r′. We explain the adaptation of improved LKH to Lam-Gouda batch rekeying through a simple example.

Figure 5. Case of several leavings processed at a time
Figure 6. A binary hybrid tree with cluster size M = 3 and group size N = 24

This study gives a clear picture of the various key management algorithms.
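To make the OFC construction above concrete, the following is a minimal sketch of the length-doubling function G and its halves L and R. SHA-256 in counter mode is only an illustrative instantiation, and the chaining at the end is schematic; it is not the exact OFC message layout from the cited papers.

import hashlib

# Sketch of a pseudo-random function G that doubles its input,
# with G(x) = L(x) || R(x) and |L(x)| = |R(x)| = |x|.
def G(x: bytes) -> bytes:
    out = b""
    counter = 0
    while len(out) < 2 * len(x):
        out += hashlib.sha256(bytes([counter]) + x).digest()
        counter += 1
    return out[:2 * len(x)]

def L(x: bytes) -> bytes:
    return G(x)[:len(x)]

def R(x: bytes) -> bytes:
    return G(x)[len(x):]

# A residual member that receives fresh material r can recompute the node keys
# on its path to the root by iterating the one-way function (illustrative only).
r = b"fresh-rekey-material"   # hypothetical value sent by the key server
path_keys, k = [], r
for _ in range(3):            # e.g. a path of length 3
    k = L(k)                  # each node key is a one-way image of the child's key
    path_keys.append(k)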
The working principles of each algorithm, along with their drawbacks, are analyzed in depth and tabulated. From this we can easily identify which algorithm is suitable for a particular application. In particular, the hybrid key management scheme combines the best features of the various algorithms and tries to provide an optimized algorithm that is more flexible than the others.

Fig. 1. A graph representing an ad hoc network with three destination nodes (d1, d2 and d3).
Fig. 3. Representation of a chromosome.
Fig. 7. Number of common links with respect to the number of multicast receivers for different network sizes.

In this paper, an energy-aware multicast routing algorithm for MANETs is proposed that uses a genetic algorithm to construct an optimized multicast tree connecting the source node and the participants of the MCG. The multicast tree consists of a minimum number of links (to reduce end-to-end delay) and passes through common links (to use minimum bandwidth). We also adopted a strategy to balance the energy consumption.

Fig. 6. Convergence speed for different crossover rates.
Fig. 5. Fitness values with respect to the size of the chromosome pool.
Fig. 4. Fitness values for different generations.

TABLE I. CURRENT TIME, DEADLINE, QUANTUM SLICE TIME, ALLOCATED EXECUTION TIME, CORE AND TOTAL TIME FOR TASKS

The weights for the tasks are calculated as a proportion based on the actual time used for execution. The weight for each task is computed based on the relation

Fig. 1. Conventional EDF schedule without slack time measures.

The algorithm is simulated using the Cheddar tool. Cheddar facilitates monitoring task utilisation and the corresponding laxity. Cheddar also identifies tasks that have to be dispatched urgently to the execution queue.

TABLE III. IMPROVEMENT IN TASK UTILISATION FOR NON-UNIFORM LAXITY APPROACH - SIMULATED ON CHEDDAR

Table III indicates the percentage improvement in task utilisation for the algorithm proposed in this paper, compared to conventional EDF. The average task utilisation is 0.75 for the proposed algorithm compared to 0.60 for conventional EDF. It can be seen that task utilisation improves by 31% with an increase in the number of cores.

Fig. 2. Modified EDF schedule incorporating non-uniform laxity.

Fig. 4 shows the schedule after incorporating the non-uniform laxity approach. Task utilisations are modified by the factor 1.5 + |u_max - 0.5|. The maximum task utilisation of the task set is then computed: from Table II, u_max = 0.32 (vide Table II, row 1); u_max is the maximum of the modified task utilisations (vide Table II). The modification factor for task utilisation is then computed as 1.5 + |0.32 - 0.5|, i.e., 1.5 + 0.18 = 1.68. Tasks are monitored for the condition U(task_i) < z(1 - 1/e) so that no task misses its deadline. The value of z(1 - 1/e) is 2.528, where Euler's number e is 2.718 and the number of cores z is 4. The modified task utilisation is 1.68 × 0.71 = 1.19, which is < 2.528. Task utilisation for a two-core system is always less than (z + 1)/2. The task utilisation of conventional EDF for the first two cores, computed from Fig. 1, is 0.87 + 0.94 = 1.81, which is < 2. After incorporating non-uniform laxity, the task utilisations for T1 and T4 are 0.64 and 0.54 respectively (derived from Fig. 2). Hence the task utilisation on core 1 is 1.68 × (0.64 + 0.54) = 1.98, where 1.68 is the modification factor. Similarly, for core 2, core 3 and core 4 the task utilisations are 1.19, 1.71 and 1.78 respectively.
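The arithmetic above, and the per-core averages derived immediately below, can be reproduced with a short numeric sketch. The figures are the ones quoted in the text; the per-core task counts are inferred from the quoted averages and are the only assumption.

import math

z = 4                                      # number of cores
u_max = 0.32                               # maximum modified task utilisation (Table II)
factor = 1.5 + abs(u_max - 0.5)            # modification factor = 1.68
bound = z * (1 - 1 / math.e)               # z(1 - 1/e) = 2.528, deadline-miss bound

core_util = {1: factor * (0.64 + 0.54),    # T1 + T4 on core 1 -> about 1.98
             2: 1.19, 3: 1.71, 4: 1.78}
tasks_per_core = {1: 2, 2: 1, 3: 3, 4: 2}  # inferred from the averages quoted below
avg_util = {c: core_util[c] / tasks_per_core[c] for c in core_util}
overall = sum(avg_util.values()) / len(avg_util)   # about 0.91
print(round(factor, 2), round(bound, 3),
      {c: round(u, 2) for c, u in avg_util.items()}, round(overall, 2))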
The average task utilisation on core 1 is 1.98/2 = 0.99, as the number of tasks is 2 (vide Fig. 2). Similarly, for core 2, core 3 and core 4 the average task utilisations are 1.19, 0.57 and 0.89 respectively. The average task utilisation for the entire task set, with per-core values of 0.99, 1.19, 0.57 and 0.89, is 0.91. This value is the task utilisation of the task set after incorporating non-uniform laxity.

IMPROVEMENT IN NUMBER OF TASKS SCHEDULED FOR NON-UNIFORM LAXITY APPROACH COMPARED TO CONVENTIONAL EDF - SIMULATED ON CHEDDAR
IMPROVEMENT IN TASK UTILISATION FOR NON-UNIFORM LAXITY APPROACH - SIMULATED ON SESC

Fig. 3. Plot comparing task utilisation for both algorithms with an increase in the number of cores.

B. Task Schedulability

Fig. 4. Plot showing improvement in task utilisation with an increase in the number of cores, using Cheddar.

Figure 1: Overview of the MPEG-2 bit stream structure

Two scanning methods are available: zigzag scan, which is typical for progressive (non-interlaced) mode processing, and alternate scan, which is more efficient for interlaced-format video. The list of values produced by scanning is then entropy coded using a variable length code (VLC). Huffman coding is an entropy coding scheme for data compression.

Figure 2: MPEG-2 Video Encoder
Figure 3: MPEG-2 decoder block diagram

LIST OF SUB-MODULES WITH CORRESPONDING TITLES OF MPEG-2 VIDEO DECODER

Algorithms for Individual Modules:

Figures 5 and 6 show the minimum and maximum bandwidth utilized, respectively. From these figures we notice that a minimum of 12200 bits and a maximum of 17907 bits of bandwidth are allotted in order to reconstruct the original videos taken as examples. Figure 7 depicts the average bandwidth utilized with respect to the number of movie songs taken. This graph is drawn by extracting the average frame bandwidth of each sample shown in Figure 4.

Figure 5: Minimum Bandwidth Utilized wrt movie song number

MPEG-2 addresses not only compression but also the transmission, or movement, of compressed digital content across a network, and UNIX is preferred for networking applications. Finally, we can conclude that the MPEG-2 decoder designed here is flexible, robust, maintainable and makes effective use of bandwidth.

Figure 6: Maximum Bandwidth Utilized wrt movie song number
Figure 7: Average Bandwidth Utilized wrt movie song number

Figure 2. Text summarization based on fuzzy logic: system architecture

The fuzzy logic system consists of four components: fuzzifier, inference engine, defuzzifier, and the fuzzy knowledge base. In the fuzzifier, crisp inputs are translated into linguistic values using membership functions applied to the input linguistic variables. After fuzzification, the inference engine refers to the rule base containing fuzzy IF-THEN rules to derive the linguistic values. In the last step, the output linguistic variables from the inference are converted to final crisp values by the defuzzifier, using a membership function, to represent the final sentence score. In order to implement text summarization based on fuzzy logic, the eight features extracted in the previous section are first used as input to the fuzzifier. We used triangular membership functions and fuzzy logic to summarize the documents.

Figure 3. Membership function of the number of words in the sentence that occur in the title

The parameters a and c set the left and right "feet", or base points, of the triangle. The parameter b sets the location of the triangle peak. For instance, the membership function of the number of words in the sentence that occur in the title is shown in Figure 3.
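A triangular membership function of this kind can be written directly from the a, b, c parameters. The breakpoints in the example call are placeholders, not the values used for this feature.

# Minimal sketch of the triangular membership function described above.
def triangular(x, a, b, c):
    """Degree of membership of x in a triangle with feet a, c and peak b (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: a feature normalised to [0, 1] evaluated against a "medium" fuzzy set.
print(triangular(0.4, a=0.2, b=0.5, c=0.8))   # about 0.667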
In the inference engine, the most important part of this procedure is the definition of the fuzzy IF-THEN rules. The important sentences are extracted by these rules according to our feature criteria. A sample IF-THEN rule is shown below.

TABLE II. COMPARISON OF THE NUMBER OF DOCUMENTS FOR AVERAGE F-MEASURE SCORE FROM DIFFERENT SUMMARIZERS
TABLE I. COMPARISON OF AVERAGE PRECISION, RECALL AND F-MEASURE SCORES AMONG FOUR SUMMARIZERS

Table II shows that 32.80% of the documents summarized by GSM reach an average f-measure above 0.50000, while the fuzzy summarizer reaches 36.80%; on the other hand, 25.60% and 32.80% of the Microsoft Word 2007 and baseline summaries, respectively, exceed 0.50000.

Figure 5. The number of documents by average f-measure score for the different summarizers
Figure 4. Average precision, recall and f-measure scores among the four summarizers

The results are shown in Table I: GSM reaches an average precision of 0.49094, recall of 0.43565 and f-measure of 0.45542. The fuzzy summarizer achieves an average precision of 0.49769, recall of 0.45706 and f-measure of 0.47181. The Microsoft Word 2007 summarizer reaches an average precision of 0.47242, recall of 0.40778 and f-measure of 0.43026. The baseline reaches an average precision of 0.47002, recall of 0.45624 and f-measure of 0.46108.

Figure 2 illustrates the main steps of our proposed approach. First, the image preprocessing step performs the localization of the pupil, detects the iris boundary, and isolates the collarette region, which is regarded as one of the most important areas of the complex iris pattern.

Figure 1. Samples of iris images from CASIA [7]

The collarette region is less sensitive to pupil dilation and is usually unaffected by the eyelids and the eyelashes [8]. We also detect the eyelids and the eyelashes, which are the main sources of possible occlusion. In order to achieve invariance to translation and scale, the isolated annular collarette area is transformed into a rectangular block of fixed dimension. The discriminating features are extracted from the transformed image, and the extracted features are used to train the classifiers. The optimal feature subset is selected using several methods to increase the matching accuracy, based on the recognition performance of the classifiers.

Figure 2: Flow diagram of the proposed iris recognition scheme

The use of iris patterns for personal identification began in the late 19th century; however, the major investigations into iris recognition started in the last decade. In [9], the iris signals were projected onto a bank of basis vectors derived by independent component analysis, and the resulting projection coefficients were quantized as features. A prototype was proposed in [10] to develop a 1D representation of the gray-level profiles of the iris. In [11], a biometric scheme was formulated that conceals random kernels within the iris images to synthesize a minimum average correlation energy filter for iris authentication. In [5, 6, 12], multiscale Gabor filters were used to demodulate the texture phase structure information of the iris. In [13], an iris segmentation method was proposed based on the crossed chord theorem and the collarette area.

Figure 3: CASIA iris images (a), (b), and (c) with the detected collarette area, and the corresponding images (d), (e), and (f) after detection of noise, eyelids, and eyelashes.
Figure 4: (I) shows the normalization procedure on the CASIA dataset; (II) (a), (b) show the normalized images of the isolated collarette regions.

Only the significant features of the iris must be encoded so that comparisons between templates can be made. Gabor filters and wavelets are well-known techniques in texture analysis [5, 20, 42, 46, 47]. Within the wavelet family, the Haar wavelet [48] was applied by Jafer Ali to iris images, and an 87-bit binary feature vector was extracted. The major drawback of wavelets in two dimensions is their limited ability to capture directional information. The contourlet transform is a new extension of the wavelet transform in two dimensions that uses multiscale and directional filter banks. To capture smooth contours in images, the representation should contain basis functions with a variety of shapes, in particular with different aspect ratios. A major challenge in capturing geometry and directionality in images comes from the discrete nature of the data: the input is typically a sampled image defined on a rectangular grid.

Figure 5: Two-Level Contourlet Decomposition [49]
Figure 6: Percentage of Fragile Bits in the Iris Pattern [52]

Biometric systems apply filters to iris images to extract information about the iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed, and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all the bits in an iris code are equally useful. For a given iris image, a bit in its corresponding iris code is defined as "fragile" if there is any substantial probability of it ending up a 0 for some images of the iris and a 1 for other images of the same iris. According to [52], considering the percentages of fragile bits in each row of the iris code, the rows in the middle of the code (rows 5 through 12) are the most consistent (see Figure 6).

Table I: GA Parameters

The Hamming distance between the vectors of the generated coefficients is then calculated. Values ranging from 0 to 0.5 for the inter-class distribution and from 0.45 to 0.6 for the intra-class distribution are included. In total, 192699 inter-class comparisons and 1679 intra-class comparisons were carried out. Figure 7 shows the inter-class and intra-class distributions. In implementing this method, we used 0.42 as the inter-class/intra-class separation point.

Figure 7: Inter- and Intra-Class Distribution
Figure 1: Model selection to mimic the market [14, p. 223]

The market can be viewed as a model that takes historical and current information as input; market participants react to this information based on their understanding, positions, speculations, analysis, etc., and the aggregation of their activities finally translates into the output, or closing price. In order to imitate the market, a model needs to take a subset of the available information, try to map it to the desired target, and then generate a forecast (Figure 3.1), of course with a certain degree of accuracy, i.e. with an error [14].

Figure 3.2: A flow chart of the main steps for developing ANN models [17, p. 48] (modified)

where X is the original time series, Y is the transform of X, and h is the forecast horizon.
As the authors did not explain the pre-processing of the target, we chose to test combinations of these methods, in addition to a 3-day moving average filter applied to the raw data, which is then transformed into relative change.

From a statistical point of view, it is essential to have a weakly stationary time series when modelling, for several reasons. First and foremost, the behaviour of a stationary time series differs from that of a non-stationary one: for a stationary series a shock has less influence over time, while for a non-stationary series the influence of a shock persists over longer time steps [22]. Furthermore, using non-stationary data could lead to misleading results, or so-called 'spurious regressions' [14, 22]. For an ANN model, using non-stationary data makes it easier for the network to approximate the general characteristics of the data rather than the actual relationship [14]. Unfortunately, most economic and financial series are non-stationary, which makes transformation into a stationary form essential. Several transformation methods exist: logarithmic difference and logarithmic return, amongst others. Neuneier & Zimmermann [23] and Gorthmann [24] suggested using, as network input, the combination of the first-order relative change (equation 3.3), the momentum, to represent the change in direction, and the second-order relative change (equation 3.4), the force, to represent turning points of the time series.

HIT RATE AND RMSE FOR IN-SAMPLE AND OUT-OF-SAMPLE

In general, as can be seen from Table 4.3, the results ranged around 50%, which is not unusual for noisy data. For noisy data, the expectation of the output being correctly predicted is ½ + ε of the time [14]. This is mainly for two reasons: the first is the noise in the data, and the second is the limitation of the data sample. The noise issue can be addressed with a noise filter such as a moving average, while solving the data limitation, or lack of information, is a more difficult task; in general, more features need to be added to compensate for the missing information, or alternatively, if there is any a priori knowledge about the input/output function, it can be modelled as a hint to improve the learning process [14]. Unfortunately, this is not the case for the oil time series; therefore we have to rely on filtering the noise. On the other hand, data transformation using eq. 3.4 generated better results for both in-sample and out-of-sample data, as eq. 3.4 contains two differencing steps. Nonetheless, combinations of these transformation methods were tested as well, using one lag each. The best combination, which outperformed all other options, is momentum (eq. 3.3) and force (eq. 3.4) as input, and force alone as target. Following this further, only this combination was tested for multiple lags (up to 12 lags from each equation). The hit rate performance improved by about 8% compared to the transformation using eq. 3.4 as both input and target.

We focused our modelling on a feedforward network. The choice of activation function and learning rate was determined by experiments. The optimisation algorithms were also compared, and Levenberg-Marquardt was chosen as it approximates second-order information, which leads to fast convergence and a higher hit rate compared to first-order algorithms like gradient descent.

TABLE III. HIT RATE AND RMSE WITH 3-DAY INPUT FOR IN-SAMPLE AND OUT-OF-SAMPLE
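For reference, one plausible reading of the momentum (eq. 3.3) and force (eq. 3.4) transformations tested above is sketched below. The exact formulas in the cited work may differ, so treat these definitions as assumptions rather than the paper's equations.

import numpy as np

# Hedged sketch: first-order relative change ("momentum") and a second-order
# relative change ("force") of a price series.
def momentum(x):
    x = np.asarray(x, float)
    return (x[1:] - x[:-1]) / x[:-1]

def force(x):
    x = np.asarray(x, float)
    return (x[2:] - 2 * x[1:-1] + x[:-2]) / x[1:-1]

prices = np.array([60.0, 61.2, 60.5, 62.0, 63.1])   # illustrative spot prices
print(momentum(prices))
print(force(prices))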
TABLE IV. FUTURES 1 PERFORMANCE AT DIFFERENT LAGS
SUMMARY OF PERFORMANCE OF CANDIDATE BENCHMARK

In addition, a 3-day moving average was applied to the raw data, which was then transformed into relative change. Table 4.6 presents the results at different lags after applying the moving average filter.

TABLE V. FUTURES 2 PERFORMANCE AT DIFFERENT LAGS
TABLE VI. FUTURES 3 PERFORMANCE AT DIFFERENT LAGS
TABLE VII. FUTURES 4 PERFORMANCE AT DIFFERENT LAGS
TABLE VIII. FUTURES ADDED TO THE BENCHMARK

C. Multi-step Forecasts

TABLE IX. HIT RATES FOR IN-SAMPLE AND OUT-OF-SAMPLE FOR 3-DAY FORECAST OF SPOT PRICE

Table 4.16 shows that the out-of-sample hit rate was acceptable for up to 2 days ahead, while beyond that it was, on average, equal to flipping a coin. Moreover, Table 4.17 illustrates the results of adding one lag of the futures prices 1, 2, 3, and 4 months to maturity on top of the benchmark.

TABLE X. HIT RATES FOR IN-SAMPLE FOR 3-DAY FORECAST OF SPOT PRICE ADDING 1 LAG OF FUTURES 1, 2, 3, 4
TABLE XI. HIT RATES FOR OUT-OF-SAMPLE FOR 3-DAY FORECAST OF SPOT PRICE ADDING 1 LAG OF FUTURES 1, 2, 3, 4

By comparing Tables 4.16 and 4.17 it is clear that futures contracts 1 and 2 improved the out-of-sample direction forecast for time t+3, while contracts 3 and 4 did not add any information to the spot price.

Therefore, according to the results of Theorem 1, the system is globally asymptotically stable. It can easily be shown that this L-K functional is continuously differentiable and satisfies hypothesis (i) of Theorem 1. In addition, we have

Figure 1. The source flow rate and the capacity of the link for an unstable system
Figure 2. The source flow rate and the capacity of the link for a stable system

Though they both relate to network security, an IDS differs from a firewall in that a firewall looks out for intrusions in order to stop them from happening. The firewall limits access between networks in order to prevent intrusion and does not signal an attack from inside the network. An IDS evaluates a suspected intrusion once it has taken place and signals an alarm. An IDS also watches for attacks that originate from within a system.

Figure 1: Intrusion Detection System hierarchy

ARCHITECTURE OF IDS USING ADVANCED HONEYPOT

Figure 3. Architecture of IDS using Advanced Honeypot
Figure 4. Test Network Environment

Some intrusion correlation systems do not use a raw data stream (such as network or audit data) as input, but instead rely upon alerts and aggregated information reports from IDSs and other sensors [20, 21]. We need to develop systems that can generate realistic alert log files for testing correlation systems. A solution is to deploy real sensors and to "sanitize" the resulting alert stream by replacing IP addresses. Sanitization in general is difficult for network activity traces, but it is relatively easy in this special case since alert streams use well-defined formats and generally contain little sensitive data (the exceptions being IP addresses and possibly passwords). Alternatively, some statistical techniques for generating synthetic alert datasets from scratch are presented in [16]. servers depending on the contents of the client request. 3. Since attacks have to be injected somewhat artificially into the sanitized data stream (regardless of the method used), the attacks will not realistically interact with the background activity.
For example, buffer overflow attacks may be launched against a web server and cause the server to crash, but normal background requests to the web server may

Figure 1. The architecture of the focused crawler
TABLE I. WEIGHT TABLE

Now the crawler tries to expand this initial set of keywords by adding relevant terms that it has intelligently detected during the crawling process [14]. Among the pages downloaded by the focused crawler, those assigned a relevance score greater than or equal to 0.9 by equation (3) are very likely to be relevant to the search topic: the relevance score lies between 0 and 1 and relevancy increases with this value, so web pages whose relevance score is over 0.9 are clearly highly relevant. The keyword with the highest frequency in each of these pages is extracted and added to the table with a weight equal to the relevance score of the corresponding page. Here Wi is the weight of keyword i and Wmax is the weight of the keyword with the highest weight. Table I shows the sample weight table for the topic "E-Business".

TABLE II. IRRELEVANT TABLE

Fig. 3 shows that page P is irrelevant, so according to the process of lines 1-8 in the algorithm, the URLs it contains (a, b, c, d, e) are all extracted and inserted into the table with level value 0 and their calculated link scores, which are assumed to be 1, 2, 3, 4 and 5 for this example; the table is then sorted (first five entries in Table II). Now pages c and e are downloaded, and their extracted URLs (f, g and h) are added to the table with level value 1 and their corresponding link scores (the process of lines 9-24; last three entries in Table II). Table II shows an example of the above algorithm for page P in Fig. 3.

Figure 3. An example structure of web pages

As a solution to this problem, we design an algorithm that allows the crawler to follow several bad pages in order to reach a good page. The working principle of this algorithm is to keep crawling up to a given maxLevel from the irrelevant page.

TABLE III. THE FINAL RATE OF TOPICS

Figure 4. Precision of the two crawlers for the topic E-Business
Figure 5. Precision of the two crawlers for the topic Nanotechnology
Figure 6. Precision of the two crawlers for the topic Politics
Figure 7. Precision of the two crawlers for the topic Sports

We illustrate a crawled result on a two-dimensional graph.

This is considered a good solution since fuzzy set theory can represent and manipulate uncertainty and ambiguity [12]. In this paper, we use the Takagi-Sugeno fuzzy inference system. This system is composed of two inputs (the two components Cb and Cr) and one output (the decision: skin or non-skin color). Each input has three sub-sets: light, medium and dark. Our algorithm uses fuzzy logic IF-THEN rules; these rules are applied to each pixel in the image in order to decide whether the pixel represents a skin or non-skin region [24]. Figure 3 shows the input image and the output image after YCbCr space conversion and fuzzy classification.

Figure 2. Conversion of an RGB image into YCbCr space: (a) the original image, (b) the Y component, (c) the Cb component and (d) the Cr component
Figure 3. Fuzzy classification of the image into skin regions and non-skin regions

Hand detection is a very important stage in starting sign language recognition. In fact, to identify a gesture, it is necessary to localize the hands and their characteristics in the image. Our system assumes that only one person is present in the field of the camera; it is described in Figure 1.
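The colour-space step that precedes the fuzzy Cb/Cr classification can be sketched as follows. The conversion is the standard BT.601 full-range formula; the crisp Cb/Cr ranges in the placeholder rule are common literature values, not the paper's fuzzy rules.

import numpy as np

# Minimal sketch: RGB -> YCbCr conversion followed by a crude crisp skin test.
def rgb_to_ycbcr(rgb):
    rgb = np.asarray(rgb, float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def crude_skin_mask(rgb):
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    # Placeholder crisp ranges; the paper replaces this with Takagi-Sugeno
    # IF-THEN rules over the light/medium/dark fuzzy sub-sets of Cb and Cr.
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)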
Figure 1. Steps of hand detection
Figure 7. Hand detection using the comparison-based method

When there is a single skin region, we cannot apply any comparison and therefore we assume that it is a hand. This is the major weakness of this method.

Figure 6. Hand detection using the elliptic method
Figure 4. Edge detection with the Canny operator
Figure 5. Samples of ellipse visualization

1) Ellipse-based method: Because the human face and hands have elliptic shapes, an algorithm that searches for such shapes is helpful for face and hand detection. The Hough transform is a well-known shape detection technique in image processing [26][27][28], so we use it in our detection algorithm.

Figure 9. Morphological operations correction: (a) finding the ellipse without correction; (b) finding the ellipse after correction

Thus, in order to identify it, we compute the number of skin pixels in each ellipse half and retain the appropriate one. We use it to adjust the hand into the vertical direction.

Figure 8. Ellipse representation for hand orientation

In the finger extraction phase, we then attempt to remove from the image all the hand components except the fingers. First, we draw a circle passing through the vertices of the rectangle found in the previous phase and remove all the skin pixels inside this circle; this removes the palm. The wrist part is eliminated by removing all the skin pixels below this rectangle.

Figure 11. Removing hand components except the fingers
Figure 12. Histogram of the skin pixel projection

Starting from these pixels, we project each one onto the x-coordinate axis. For each x-coordinate value, we count the number of skin pixels; the obtained counts, normalized by the hand length, give the histogram shown in Figure 12. We then localize it in the region having the maximum number of skin pixels. The region bounded by the red rectangle shown in Figure 10 is the palm region.

TABLE I. THE HAND GESTURE RECOGNITION RESULTS

A good decision tree obtained by a data mining algorithm from the learning dataset should produce good classification performance not only on data already seen but also on unseen data. In our experiments, in order to ensure the performance stability of the models learned from the learning data, we therefore also tested the learned models on our test data.

Figure 13. Classification rate

Figure 13 shows that our approach provides very satisfying results, with over 90% classification accuracy on our test dataset, and that ID3 and C4.5 provide the best classification rates. We retain ID3.

Figure 14. Recall rate (ID3 classifier)

The remainder of our experiments focuses on the recall and precision rates obtained for each digit. Across our experiments, our system provides a good classification rate on some digits, in particular digits 7 and 2. For these digits, the number of peaks is sufficient for the decision rule; indeed, the ID3 classifier (as well as the others) puts the emphasis on this criterion.

Figure 1. System Context Diagram of the Incident Handling and Response System

Using Tripwire for intrusion detection and damage assessment helps you keep track of system changes and can speed recovery from a break-in by reducing the number of files you must restore to repair the system.
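The underlying idea can be illustrated with a short sketch: a generic baseline-and-compare integrity check, not Tripwire's actual database format or policy language.

import hashlib, json, os

# Hash a set of monitored files into a baseline, then report any file whose
# current hash differs from (or is missing from) the stored baseline.
def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths, baseline_file="baseline.json"):
    baseline = {p: hash_file(p) for p in paths if os.path.isfile(p)}
    with open(baseline_file, "w") as f:
        json.dump(baseline, f, indent=2)

def check_against_baseline(baseline_file="baseline.json"):
    with open(baseline_file) as f:
        baseline = json.load(f)
    # Files to inspect or restore after a suspected break-in.
    return [p for p, digest in baseline.items()
            if not os.path.isfile(p) or hash_file(p) != digest]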
Figure 3. Working of Tripwire
Figure 2. Architecture of the Network Intrusion Detection System

Depending on whether a packet matches an intruder signature, an alert is generated or the packet is logged to a file or database.

Figure 4. The SSH protocol
Figure 1. Generating feature statistics from an input data set.
Figure 2. Computation of MFCCs from an audio signal.

MFCCs are the features we used to represent a song model. They are computed directly from the audio waveform using the approach in Figure 2. M is the window size in this short-time function. The ZCR curves are calculated as follows:

Algorithm Description:

Figure 3. Block diagram of the designed automatic music genre classification system.

and beats. This step leads to 8 feature sequences for the whole MP3 signal. In the sequel, for each of the 8 feature sequences,

TABLE I. DISTRIBUTION OF THE NUMBER OF SONGS IN EACH GENRE IN THE EXPERIMENTAL DATABASE (DATASET A)
TABLE II. DISTRIBUTION OF THE NUMBER OF SONGS IN EACH GENRE IN THE EXPERIMENTAL DATABASE (DATASET B)

Thus, with DATASET C as the input, the system is able to obtain an accuracy of 70% with 6 genre classes and 10 songs.

TABLE III. DISTRIBUTION OF THE NUMBER OF SONGS IN EACH GENRE IN THE EXPERIMENTAL DATABASE (DATASET C)
TABLE IV. DISTRIBUTION OF THE NUMBER OF SONGS IN EACH GENRE IN THE EXPERIMENTAL DATABASE (DATASET D)

Figure 4. Accuracies for the individual datasets (in %) and overall accuracy of the system.

The overall accuracy of the developed system is plotted in Figure 4, which shows the individual accuracies for DATASET A, DATASET B, DATASET C and DATASET D as well as the overall accuracy of the system.

AUTHORS' PROFILE

Figure 1: A Gantt-chart representation of a solution for a 4×4 problem
Table 5. Comparison of GTA and TS performance for the TSP
Figure 3. Comparison of GTA and TS performance for the TSP

Figure 1. Architecture of the IXP2400

The packets are injected into the network processor from the network through the Media Switch Fabric Interface. The packet data and its metadata are then kept in DRAM and SRAM respectively. The packets are then forwarded to the microengines for processing. Finally, the processed packets are driven into the network by the Media Switch Fabric Interface (MSF) at the output port.

Figure 3. Throughput vs. the various algorithms

The packet classification algorithms Tuple Space Search (TSS) and Trie-Based Tuple Space Search (TTSS), in two different versions, have been implemented using a single microengine.

Figure 4. Microengine idle time

Figure 3 shows that at the end of 50,000 microengine cycles (80 μs), the number of packets classified by TTSSV1 is 81% and 84% more than by TTSSV2 and TSS respectively.

Figure 2. Software framework design

ports has been exceeded. If so, it stops transmitting on that port and any requests to transmit packets on that port are queued up in local memory. The Packet Transmitter microblock periodically updates the classifier with information about how many packets have been transmitted. In this paper, microblocks are implemented using microcode.

Figure 5. Microengine abort time

The ME is an important resource of the NP, whose utilization can be described by its idle time and aborted time. Figure 4 shows that the reduction in idle time of microengine ME0:1 is about 17% and 33% for the TTSSV1 classification algorithm when compared to TTSSV2 and TSS respectively. The aborted time of microengine ME0:1 is the percentage of total time wasted due to instructions being aborted.
Figure 5 shows that the aborted time of the microengine (ME0:1) for the TTSSV1 classification algorithm is 13% and 23% less than that of TTSSV2 and TSS respectively, implying that the microengine involved in packet classification is utilized very effectively. Figure 6 shows that, for a fixed receive rate, the transmit rate of the network processor with the TTSSV1 algorithm is 38% and 62.5% more

Another issue with Non-Functional Requirements (NFRs) is that most of the time they are stated in natural language. By definition, each requirement, functional or non-functional, must be objective and quantifiable; secondly, there must be some way to measure whether the requirement has been met. According to [2], NFR frameworks can be based on goals. The research begins with the identification of hard and soft goals that represent the NFRs the stakeholders agree upon. Hard goals are easy to incorporate (they become functional requirements).

A. NFR using the Likert Scale

The proposed methodology helps quantify requirements, moving from a simple yes/no towards more comprehensive feedback. For example: do I want to close this project in one month? We can proceed in an open-ended way and get some descriptive response, but a better line of action could be the Likert scale approach. The options according to the Likert scale are shown in Table-2. In response to the aforesaid question, a list of, say, 30 possible statements on the topic would be made up; for instance, in response to our question we receive 15 favorable and 15 unfavorable statements. Participants would then rate each statement on a seven-point scale as follows. A person's attitude is the summed score over all questions.

TABLE-2: LIKERT SCALE EXAMPLE
TABLE-3: FUZZY LOGIC EXAMPLE

However, depending upon its requirements, any organization can mold this template into three levels, i.e. highly reliable, average and low reliability. From the table it is obvious that more emphasis is placed on recoverability, which is why it has been assigned the highest weight, 8. In an ideal scenario the frequency of failure should be as low as possible (the lowest being zero) and recoverability should therefore be as high as possible (the maximum could be 10). If the computed score is equal to this ideal value or more, the software will be deemed highly reliable. Along similar lines, reliability could be measured using the template given in Table-2, and the ideal score will be 8.

We are convinced that measurement as practiced in other engineering disciplines is IMPOSSIBLE for software engineering. This statement clearly highlights the issues related to all requirements associated with a software system. Therefore, the idea behind our research is to make NFRs measurable, in absolute terms where possible. For example, "response time should be fast" is a vague NFR, whereas if we say "response time should be less than one second" it becomes a measurable non-functional requirement, and as a result it can be treated like a normal functional requirement. Basically, our research divides the integration point of FRs and NFRs into three kinds of requirements. The approach is explained in Figure-2.

Figure-1: Handling of Vague Requirements: Fuzzy Logic Example
Figure 3: Representing the network architecture

For instance, we use the structure below for representing the network architecture:

As mentioned before, an ANN is an inherently parallel system and therefore requires parallel programming for its implementation in our research project.
In this regard we shall use the standard MPI library, which has implementations in the C and Fortran languages. Although other tools are available, because of the high speed of programs written in the C language we will use this language for implementing our framework.

The execution manager is in charge of coordinating the other three units and also manages the simulation execution of XDANNG in the bootstrapping, execution and output phases. This duty involves feeding the input layer and summing the output results from the output layer of the simulated ANN.

Figure 1: XDANNG framework building blocks

To define the architecture of this network model in our platform we use a file such as the one below. For instance, imagine the neural network below; for this example we have an input structure like the one shown. Other values of the neural network, such as the weight values of each node, are also definable in our proposed platform. To define the input values of the neural network, the schema below is used. To describe the output nodes of the whole neural network we use the same method, i.e. structures similar to the one above are used to define the final nodes to which the results of the neural network process will be exported.

Figure 1: Integrated Defense System

but also discrete, dynamic, non-linear, stochastic, and large-scale in terms of the increased number of WSs and targets [3]; it is difficult to solve optimally when the number of threats and weapons is large, as the computation time of the solution increases rapidly with the size of the problem. For an efficient TEWA system there is a need to strike a balance between the usefulness and the effectiveness of weapon systems (WSs) [2], [3]. Manual TEWA systems cannot provide optimality because of the small amount of information available and the established situational picture (limited awareness and information), and because of the operator's limitations, such as vision range constraints, experience, observation, understanding of the situation and mental condition. It is a known fact that humans are prone to errors, especially in stressful conditions. As a consequence, in the military domain an operator facing a real attack scenario may fail to come up with an optimal strategy to neutralize targets. This may cause a great deal of ammunition loss, with an increased probability of damage to expensive assets. Moreover, most semi-automated TEWA systems usually work on a target-by-target basis using some type of greedy algorithm, thus affecting the optimality of the solution and failing in multi-target scenarios [4]. Figure 2 shows a graphical representation of TEWA along with two critical concerns found in multi-target scenarios.

Figure 2: Graphical representation of TEWA along with two critical concerns found in multi-target scenarios

optimal threat evaluation and weapon assignment algorithm for multi-target air-borne threats. The algorithm provides a near-optimal solution to the defense resource allocation problem while maintaining constraint satisfaction. The model used is kept flexible to compute the most optimal value for all classes of parameters.

Figure 3: Hybrid two-stage proposed solution along with an outline of the pre-processing phase
Figure 4: Boyd's OODA loop for WA

From an implementation point of view, each of these stages can iterate through Boyd's Observe-Orient-Decide-Act (OODA) loop [6]. Figure 4 shows the OODA loop for WA. In this section, a new two-stage model of Threat Evaluation and Weapon Assignment is proposed.
The assignment problem is formulated as an optimization problem with constraints and then solved by a variant of the many-to-many Stable Marriage Algorithm (SMA). Since the threat and weapon libraries are correlated, this process uses the correlation to create a general preference list of WSs for the assigned threat, showing the WSs that are capable enough to neutralize it. The algorithm then searches for matching WSs from the set of WSs at hand. Using threat parameters such as heading, course and direction, WA finds the subset of WSs from the previous set that have, or are expected to have, this threat in their range at some timestamp t_y. Let this set be represented by WS_1. For each WS′ belonging to WS_1, a temporary pairing of the threat and WS′ is made. For each pair, the algorithm calculates the time to WS′, the distance from WS′, the time of flight (TOF) for WS′, the required elevation angle for WS′, and the lead calculations and launch points based on the velocity of the threat and the TOF of WS′. For each temporary pair, the algorithm calculates the weight of the pair using a parametric weighted equation. A proposal is sent to the weapon system WS′ of the selected temporary pseudo-pair. If it is accepted by WS′, the threat is assigned to WS′; otherwise a new proposal is created and sent to the weapon system WS′ belonging to the next pseudo-pair.

Figure 6 shows the concept of entry and exit points along with the POI calculation for a WS. The basic mathematics of WS-threat POI calculation is 50% the same as that of DA-threat POI calculation; however, since WSs have their own sweep and start angles, while calculating POIs there is a need to consider WS parameters such as arc boundaries, sweep angle, start angle, elevation angle and field of fire. Once the POIs are calculated, the actual matching is done. This matching is subject to the following constraints:

TABLE I. SUMMARY OF A FEW SIMULATIONS

explored and used in a most prolific way. For optimality, these parameter weights are kept configurable. This paper explains the main optimization steps required to react to changing complex situations and provide near-optimal weapon assignment for a range of scenarios. The proposed algorithm uses a variant of the many-to-many Stable Marriage Algorithm (SMA) to solve the Threat Evaluation (TE) and Weapon Assignment (WA) problem. TE corresponds to threat ranking and threat-asset pairing, while WA corresponds to finding the best WS for each potential target using a flexible dynamic weapon scheduling algorithm that allows multiple engagements using a shoot-look-shoot strategy. The analysis part of this paper shows that this new approach to TEWA computes a near-optimal solution for a range of scenarios.

Fig. 1. Transformation of relational data into an efficient bitmap representation for attributes with non-binary domains.

A1 = "age_young", A2 = "age_middle", A3 = "age_old" (see Fig. 1)

The minimum support of the Apriori algorithm is 0.45%, and the computation times and the numbers of frequent itemsets found by the two algorithms are shown in Figure 2.

Figure 2. (a) Number of frequent itemsets. (b) Computation time.

Anjana Pandey was born on December 18, 1978. She completed her Master in Computer Applications. Her special fields of interest include data mining. Presently she is pursuing a PhD in the MCA Department at MANIT, Bhopal. K. R. Pardasani was born on 13 September 1960 at Mathure, India. He completed his graduation, post-graduation and PhD (mathematics) at Jiwaji University, Gwalior, India. His employment experience includes Jiwaji University Gwalior, MDI Gurgaon and MANIT Bhopal, India.
Presently he is Professor and Head of Mathematics at MANIT, Bhopal, and his current interests are data mining and computational biology.

The proposed model employs a combination of a partitioning approach with rough sets to mine frequent sequences. It uses a divide-and-conquer strategy to explore the search space of frequent sequences. This Rough Sets Partitioning (RSP) model is a three-stage strategy; the first step is to freeze the time interval through previsualization of a sample of records. Figure 2.0 depicts the interactive window that allows the user to query the transaction table and get a previsualization of a sample of the sequences that emerge from the choice of event window. The event window can be adjusted accordingly so that a sequence length close to the optimum is obtained for sound predictive implication relations. This event window is frozen by an experienced analyst from the domain under study, who can judge the emerging patterns through previsualization.

Fig. 2.0 Sequential Pattern Previsualization

It is evident that the length of a sequence depends upon the time interval under consideration: a long time interval will produce long sequential patterns. In some applications, especially pattern study for managerial decision support, an experienced professional needs to view only a sample of sequential patterns to decide the time window for the generation of useful implication relations. Often, viewing a sample of sequences motivates an adjustment of the event window under consideration until an optimum event window for the generation of useful implication relations is finally found.

(t_s, t_e) is {e1, e2}, where t_s represents the start time and t_e the end time.

Fig. 4.0 Partitions on the basis of prefix indiscernibility

Step 3: Now we use a dynamic frequency counting method in each partition to accumulate the frequency of occurrence of serial episodes. For this we need to maintain an array of frequencies of the size of all the elements bifurcated into partitions based on the indiscernibility relation. We exclude items with cardinality 1 from this process as they are already accounted for in Step 1. We use Property 1 for frequency accumulation for sequences with cardinality > 1. The following steps explain the same:

Step 3.1: Scan E

Fig. 7.0 Runtime comparisons on C15-I1-N15-D10k
Fig. 6.0 Runtime comparisons on C15-I1-N15-D400

tool proposed in the article. The data comprises gross faults that have occurred at landline locations in the state of Madhya Pradesh in India. The results of the method help in finding sequential patterns of the form "line disturbance due to a joint fault in the SP-DP section results in a dead phone". Thus, line disturbance due to a joint fault in the SP-DP section is the antecedent event and the dead phone is the subsequent event. The results help in proactive maintenance of network health and cut down the revenue lost to network downtime. The algorithm can serve as a decision support system for the organization.

TABLE II. QUESTIONNAIRE DISTRIBUTION SUMMARY
TABLE I. CHARACTERISTICS OF DOMAIN EXPERTS

The selection of these experts is based upon the nature of the target problem, the skills and experience of the knowledge engineers, and the availability and accessibility of the domain experts. A detailed study of each and every technique is beyond the scope of this paper; however, many articles on knowledge acquisition techniques are available in the literature.
After determining the nature of the problem, its major characteristics, and the availability and accessibility of domain experts, it was decided to use a questionnaire as the tool for acquiring knowledge.

TABLE IV. WEIGHT ASSIGNMENT TO MAIN FACTORS

Figure 1. Knowledge Acquisition Process

Different aspects of a teacher's performance, which depend upon attributes such as personal, teaching, responsibility & punctuality and administrative attributes, are included in the scale. As an EXPERT in this area, you are requested to examine each item in terms of its suitability and then to indicate your degree of agreement with each item, that is, whether in your opinion it would measure the factors affecting teachers' performance, and to what extent. You may recommend new items and delete unnecessary items from the existing scale. Your timely response will be highly appreciated. Please use the scale below to mark (✓) your responses in the area provided.
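Responses gathered on such an agreement scale are typically reduced to a single score per respondent by mapping each level to a number and summing, optionally weighting items by the factor weights of Table IV. A minimal sketch follows; the seven-point level-to-number mapping and the example weights are illustrative assumptions, not values from the study.

# Minimal sketch, not the paper's instrument: summed (optionally weighted)
# Likert-style scoring of questionnaire responses.
LEVELS = {
    "strongly disagree": 1, "disagree": 2, "somewhat disagree": 3,
    "neutral": 4, "somewhat agree": 5, "agree": 6, "strongly agree": 7,
}

def attitude_score(responses, weights=None):
    """responses: {item_id: level string}; weights: optional {item_id: weight}."""
    weights = weights or {}
    return sum(LEVELS[level] * weights.get(item, 1)
               for item, level in responses.items())

responses = {"punctuality_1": "agree", "teaching_3": "strongly agree"}
weights = {"punctuality_1": 2, "teaching_3": 3}   # hypothetical factor weights
print(attitude_score(responses), attitude_score(responses, weights))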