Cloud computing is where software applications, data storage and processing capacity are accessed via the internet. This paper analyses the importance of cloud computing for software applications delivered as a service. To achieve that, a web-based application, OPMaS, was developed to provide a service that improves audit readiness and compliance, together with conformance to records retention policies and automated records management, bundled using the SaaS model. The web application is meant to manage the personnel of any given organization, providing strong functionality developed in modules such as: personnel information management, training information management, leave management, resume management, appraisal management, document management, reporting and payroll management. This is relevant to many developing countries, especially Nigeria, where interest in e-commerce is growing fast. The system was evaluated using...
Humans are generally exposed to various sources of radiation in the course of our day-to-day activities, irrespective of our work environment. Though the non-ionizing radiation from common sources such as phones, computers, electronic devices, power lines and wireless devices is not known to present serious health risks, control remains the watchword. This study investigates the level of awareness of the health challenges posed by radiation from mobile phones, computers and other ICT devices among users, and suggests ways of reducing these free radicals from the body. A total of 500 students were used for the study with the aid of an "Impact Awareness Questionnaire" developed and tested for face and content validity, with reliability assessed through internal consistency and stability checks. The data captured in the study was analysed using simple percentages, frequencies and the Chi-Square test. The result from this study establishes that there is no significant relationship between knowing that IC...
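As an illustration of the kind of test reported, the sketch below computes the chi-square statistic of independence for a hypothetical 2×2 table of awareness versus precaution-taking; the counts are invented for demonstration and are not the study's data.

```cpp
#include <iostream>

// Chi-square test of independence for a 2x2 contingency table.
// Rows: aware / not aware; columns: takes precautions / does not.
// The observed counts below are hypothetical, not the survey's results.
int main() {
    const double observed[2][2] = {{120.0, 130.0},
                                   {115.0, 135.0}};
    double rowTotal[2] = {0, 0}, colTotal[2] = {0, 0}, grand = 0;
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 2; ++c) {
            rowTotal[r] += observed[r][c];
            colTotal[c] += observed[r][c];
            grand += observed[r][c];
        }

    double chiSquare = 0;
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 2; ++c) {
            double expected = rowTotal[r] * colTotal[c] / grand;
            double diff = observed[r][c] - expected;
            chiSquare += diff * diff / expected;
        }
    // One degree of freedom; compare against the 0.05 critical value 3.841.
    std::cout << "chi-square = " << chiSquare
              << (chiSquare < 3.841 ? " (not significant at 0.05)\n"
                                    : " (significant at 0.05)\n");
    return 0;
}
```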
Estimating software cost in an agile setting in terms of effort is very challenging, because the traditional models of software cost estimation do not fit the agile development process completely. This paper presents a methodology to enhance project cost estimation in agile development. The hybridization combines Class Responsibility Collaborator (CRC) models with function points, thereby strengthening the agile software development estimation process. The study found that adopting the hybridized Class Responsibility Collaborator model with function points yields a marked improvement in cost estimation for agile software development. A sketch of the kind of calculation such a hybrid might involve is given below.
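The following is a minimal sketch only: the function-type counts, CRC figures, adjustment formula and productivity constant are illustrative assumptions, not values or formulas from the paper.

```cpp
#include <iostream>

// Illustrative sketch: compute an unadjusted function point (UFP) count,
// scale it by a complexity factor derived from CRC cards, and convert the
// result to effort. All counts and constants are assumed for demonstration.
int main() {
    // Counts of the five function types with IFPUG-style average weights.
    const int inputs = 12, outputs = 8, inquiries = 5, files = 4, interfaces = 2;
    const double ufp = inputs * 4.0 + outputs * 5.0 + inquiries * 4.0
                     + files * 10.0 + interfaces * 7.0;

    // Hypothetical adjustment from CRC modelling: the more responsibilities
    // and collaborators per class, the higher the complexity factor.
    const int crcClasses = 15, responsibilities = 45, collaborators = 30;
    const double crcFactor = 1.0 + 0.01 * (responsibilities + collaborators) / crcClasses;

    const double adjustedFp = ufp * crcFactor;
    const double hoursPerFp = 8.0;               // assumed team productivity
    std::cout << "Adjusted FP: " << adjustedFp
              << ", estimated effort: " << adjustedFp * hoursPerFp
              << " person-hours\n";
    return 0;
}
```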
British Journal of Mathematics & Computer Science, 2017
Internal sorting algorithms are used when the list of records is small enough to be maintained entirely in primary memory for the duration of the sort, while external sorting algorithms are used when the list of records is too large to be held in primary memory, so external/secondary storage is needed for the duration of the sort. Almost all operations carried out by computing devices involve sorting and searching, which employ internal sorting algorithms. In this paper, we present an empirical analysis of internal sorting algorithms (bubble, insertion, quick, shaker, shell and selection) using samples of randomly generated integer values with sizes ranging from 100 to 50,000. Using the C++ time function, it was observed that insertion sort has the best performance on small samples, say between 100 and 400 elements. When the sample size increases to 500, shaker sort performs better. Furthermore, when the sample grows beyond 500 elements, shell sort outperforms all the internal sorting algorithms considered in the study. Meanwhile, selection sort displayed the worst performance on samples of size 100 to 30,000. As the sample size grows to 50,000 and above, the performance of shaker sort and bubble sort degrades even below that of selection sort. When the sample size increases beyond 1,000, shell sort should be considered first for sorting.
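Measurements of this kind can be reproduced with a small harness. The sketch below times insertion sort and shell sort on the same random sample using std::chrono; the sample size, seed and gap sequence are illustrative choices, not the paper's exact setup.

```cpp
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

// Insertion sort: efficient on small or nearly sorted inputs.
void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}

// Shell sort with the simple gap sequence n/2, n/4, ..., 1.
void shellSort(std::vector<int>& a) {
    for (std::size_t gap = a.size() / 2; gap > 0; gap /= 2)
        for (std::size_t i = gap; i < a.size(); ++i) {
            int key = a[i];
            std::size_t j = i;
            while (j >= gap && a[j - gap] > key) { a[j] = a[j - gap]; j -= gap; }
            a[j] = key;
        }
}

// Time one sort on a private copy of the data, in microseconds.
template <typename Sorter>
long long timeSort(std::vector<int> data, Sorter sort) {
    auto start = std::chrono::steady_clock::now();
    sort(data);
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
}

int main() {
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 1000000);
    std::vector<int> sample(10000);               // illustrative sample size
    for (int& x : sample) x = dist(gen);

    std::cout << "insertion: " << timeSort(sample, insertionSort) << " us\n"
              << "shell:     " << timeSort(sample, shellSort) << " us\n";
    return 0;
}
```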
British Journal of Mathematics & Computer Science, 2015
The Unified Modeling Language (UML) is a general-purpose visual modeling language for specifying software-intensive systems. More precisely, it is a graphical language for visualizing, specifying, constructing and documenting the artifacts of software-intensive systems. UML is a key enabling technology for software developers and software engineers who seek to transition from traditional, human-intensive, code-centric software development processes to Model-Driven Development (MDD) processes that are requirements-driven and architecture-centric. However, due to a lack of skills among developers and the general-purpose nature of UML diagrams, many developers abuse the notation by drawing diagrams that do not match the particular activities or scenarios in the software project. This study reviews how UML is abused and also provides a simple representation of the UML diagrams.
International Journal of Advances in Scientific Research and Engineering
Information technology has grown rapidly, leading to challenges in communicating information with nodes at remote locations. There are many communication media, including certain varieties of wired links such as coaxial cables, where multiple nodes can all be linked so that they hear each other's transmissions either correctly or with some non-zero probability. Generally, there are two rudimentary ways of sharing such media: time sharing and frequency sharing. This paper tackles the rudimentary question of how one common communication channel, also called a shared medium, can be distributed among distinct nodes with maximum productivity. It analyzes wireless sensor networks and their accompanying technologies, citing their pros and cons. Focusing on time sharing, we examine methodically the two prevalent approaches used to achieve it: time division multiple access (TDMA) and contention protocols, both of which are widely used in today's systems. The paper also shows that, with a properly chosen transmission probability, the utilization of the slotted contention scheme tends to 1/e (about 37%), while higher attempt rates tend to increase collisions. To adapt the transmission probability, a calculation scheme was adopted, the idea being to converge towards this favourable operating point. Slotted Aloha has twice the utilization of unslotted Aloha as the number of backlogged nodes grows. The study also shows that each node will attempt transmission within some fixed number of slots; this is guaranteed when the slot is chosen from a finite window, but not when it is chosen with a geometric distribution.
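The 1/e figure follows from the standard slotted-Aloha analysis: if the aggregate number of transmission attempts per slot is Poisson with mean G, a slot carries a successful frame only when exactly one attempt occurs, so

\[
S(G) = G e^{-G}, \qquad \frac{dS}{dG} = (1 - G)e^{-G} = 0 \;\Rightarrow\; G = 1, \qquad S_{\max} = e^{-1} \approx 0.37 .
\]

In unslotted (pure) Aloha a frame is vulnerable for two slot times, giving \(S(G) = G e^{-2G}\) with a maximum of \(1/(2e)\), which is the "twice the utilization" comparison made above.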
ABSTRACT: As the Grid architecture provides resources that fluctuate, an application that is to run in this environment must be able to take into account the changes that may occur; it must adapt to the changes in the Grid environment. Checkpointing is one way to make applications respond to these changes, though it cannot be done without incurring checkpoint overheads. To reduce these overheads and make the application run optimally, the checkpoint interval and the total time for which an executing job holds its resources must be taken into consideration. This can be achieved efficiently through exception handling. Exception handling, though it has its roots in programming language design, can be used to handle faults in a Grid environment. Combining an exception handling model with the checkpoint parameters can perform optimally in reducing faults in Grid resources. Keywords: Checkpoint, Exception Handler, Grid, Computing Resources, Fault Tolerance.
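A minimal sketch of the idea, assuming a fixed checkpoint interval and a hypothetical resource fault signalled as a C++ exception; the interval, fault type and checkpoint file name are illustrative, not the paper's model.

```cpp
#include <fstream>
#include <iostream>
#include <stdexcept>

// Hypothetical fault raised when a grid resource is lost mid-computation.
struct ResourceFault : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Save/restore the loop state so work done before a fault is not repeated.
void saveCheckpoint(long step, double partial) {
    std::ofstream out("checkpoint.dat");
    out << step << ' ' << partial;
}
bool loadCheckpoint(long& step, double& partial) {
    std::ifstream in("checkpoint.dat");
    return static_cast<bool>(in >> step >> partial);
}

int main() {
    const long totalSteps = 1000000;
    const long checkpointInterval = 100000;    // assumed interval
    long step = 0;
    double partial = 0.0;
    loadCheckpoint(step, partial);             // resume if a checkpoint exists

    while (step < totalSteps) {
        try {
            for (; step < totalSteps; ++step) {
                partial += 1.0 / (step + 1);   // stand-in for real work
                if (step % checkpointInterval == 0)
                    saveCheckpoint(step, partial);
                // A real grid runtime would raise ResourceFault here when
                // the hosting resource fluctuates or is withdrawn.
            }
        } catch (const ResourceFault& e) {
            std::cerr << "fault: " << e.what() << ", rolling back\n";
            loadCheckpoint(step, partial);     // restart from last checkpoint
        }
    }
    std::cout << "result: " << partial << '\n';
    return 0;
}
```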