Papers by Fernando Camutari MSc/EngTech/CEng/MBCS/MIEEE
A Dichotomy of Database Cloud Providers' Services, 2022
In this document, one aims to analyse databases across different cloud platform providers, looking at the differing and similar features across AWS, Azure, and GCP. These sophisticated and useful database services are a core enabler for creating and deploying applications and systems that interact with them. One will also look at database continuous improvement methodology, with the aim of providing an adequate method of continuous improvement for the Magellan Robotech DB estate, which is made up of SQL Server and PostgreSQL databases.
Wiley Encyclopedia of Computer Science and Engineering, 2009
wiki.dcs.shef.ac.uk
Background: There has been a massive generation of biological data due to the high complexity of genome and post-genomic technologies. Through the implementation of a biological model, one is able to extrapolate and understand the different biological systems within reach.
MySQL 5.7 Database Engine Upgrade & Test Plan, 2021
MySQL 5.7 has been GA since October 2015. At the time of writing, it is still a very new release, but more and more companies are looking into upgrading, as it has a list of great new features. Schema changes can be performed with less downtime and with more online configuration options. Multi-source and parallel replication improvements make replication more flexible and scalable. Native support for the JSON data type allows for storage, search and manipulation of schema-less data. In addition, scalability is available at a finer granularity. An upgrade, however, is a complex process, no matter which major MySQL version you are upgrading to. Upgrading to a new major version involves risk, and it is important to plan the whole process carefully. In this document, we'll look at the important new changes in 5.7.
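As a brief, hedged illustration of the native JSON support referred to above, the sketch below uses MySQL 5.7's JSON column type; the table and data are hypothetical and not drawn from the paper.

-- Hypothetical table; names and values are illustrative only.
-- MySQL 5.7 (5.7.8+) adds a native JSON column type, validated on write.
CREATE TABLE player_profile (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    profile JSON NOT NULL
);

INSERT INTO player_profile (profile)
VALUES ('{"name": "alice", "country": "PT", "limits": {"daily": 50}}');

-- Schema-less data can be searched in place.
SELECT id,
       JSON_UNQUOTE(JSON_EXTRACT(profile, '$.name')) AS player_name
FROM   player_profile
WHERE  JSON_EXTRACT(profile, '$.limits.daily') > 20;

-- ...and manipulated without rewriting the whole document.
UPDATE player_profile
SET    profile = JSON_SET(profile, '$.limits.daily', 100)
WHERE  id = 1;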
Changes between MySQL 5.6 and MySQL 5.7, 2021
MySQL 5.7 has been GA since October 2015. At the time of writing, it is still a very new release, but more and more companies are looking into upgrading, as it has a list of great new features. Schema changes can be performed with less downtime and with more online configuration options. Multi-source and parallel replication improvements make replication more flexible and scalable. Native support for the JSON data type allows for storage, search and manipulation of schema-less data. In addition, scalability is available at a finer granularity. An upgrade, however, is a complex process, no matter which major MySQL version you are upgrading to. Upgrading to a new major version involves risk, and it is important to plan the whole process carefully. In this document, we'll look at the important new changes in 5.7.
The imperative objective of this e-commerce project was to develop a Java-based open access e-commerce journal website. The e-journal is freely accessible to any user to aid the submission, dissemination and distribution of results. However, the administrative process and maintenance of the e-journal follow a model that allows potential authors to take on the burden of running the website in as productive a way as they see fit.
It is recommended that specifications are produced, as there are two target audiences that need to be informed of how the data will be migrated. The first audience is the person managing the data migration, who will need information on which data is being migrated and how, as well as the target of the data. The second audience is the SQL developer, the technical engineer responsible for extracting such data; the engineer will need to be informed of the data to be migrated as well as the format of the data being migrated.
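As a hedged sketch of what such a specification might capture for the SQL developer, the extract below maps hypothetical source columns to a hypothetical target table; every object name and formatting rule shown is an assumption for illustration only.

-- Hypothetical source-to-target extract recorded in a migration specification;
-- source_db, target_db and all column names are illustrative, not from the document.
-- The spec notes which rows are in scope, how fields are reformatted, and where they land.
INSERT INTO target_db.dbo.Customer (CustomerId, FullName, CreatedOn)
SELECT  c.cust_id,
        c.first_name + ' ' + c.last_name,   -- format change documented in the spec
        c.created_at
FROM    source_db.dbo.customers AS c
WHERE   c.is_active = 1;                     -- scope of the migrated data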
Coding standards are conventions and methods developers follow when developing, editing or maintaining program code. T-SQL code must work properly and efficiently, but that is not enough: one needs to comment and document the T-SQL code; it must be inherently readable and well laid out; the use of informative and obvious names is imperative; and it must be robust, resilient and written defensively. It must not rely on deprecated features of SQL Server or assume particular database settings. Better programming style, developer understanding, readability and reduced application development time are the results of following coding standards. This ensures that both the code and the databases are technical assets of high quality that aid in the reusability and implementation of efficient software products vis-à-vis data migration, data analytics and business reports, as sketched below.
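A minimal sketch, assuming hypothetical object names, of the kind of T-SQL these conventions produce: documented, readable, defensively written, and free of deprecated features.

-- Hypothetical, illustrative procedure; names are not taken from any real estate.
-- Conventions shown: informative names, explicit column lists, no SELECT *,
-- SET NOCOUNT to suppress extra result messages, and defensive error handling.
CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerId INT   -- obvious, informative parameter name
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        SELECT  o.OrderId,
                o.OrderDate,
                o.TotalAmount
        FROM    dbo.CustomerOrder AS o
        WHERE   o.CustomerId = @CustomerId;
    END TRY
    BEGIN CATCH
        THROW;        -- surface errors rather than silently swallowing them
    END CATCH;
END;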
This research study explores the pros and cons of the management of IT projects, specifically software project management. Furthermore, this research paper looks vividly at the imperativeness of software management, from the angle of why some software engineering projects are successful and others are not. One of the main probing questions that arises through this research is whether a lack of computer science knowledge among project managers is a real factor in some software projects being successful and others not. In addition, this study investigates and presents an IT project management life cycle model with software engineering facets.
Programming is an art that enables one to elaborate on or instruct one's aim to compute specific facets. The adoption of functional programming as a mainstream language is a challenge, but one with great rewards. Functional programming frames the concepts of the problems one reasons about when developing robust, scalable and, for example, concurrent systems. This research study considers the following: why is functional programming the future for software engineering, and why is functional programming receiving more attention than mainstream languages such as C++ and others?
One of the imperative values that make data warehouse systems vital for an organisation is the capability to aid in the decision-making process and to enable analysis of the status, as well as the development, of an organisation (Golfarelli, 2010). The classification of systems within an enterprise institution enjoys broad agreement among various academics: there are two types of systems; operational systems, which allow the implementation and deployment of additional systems and are responsible for multiple daily operations; and information systems, whose sole purpose is to store, retrieve and manage huge quantities of data that are reused for various transactions or purposes (e.g. translating data into information, information into facts, and storing it). This classification of systems is controversial among many scholars, specifically Zureck et al. (1999), who challenged it by declaring that the distinction is misleading, because it is not easily differentiated and is sometimes distorted. For example, intelligent systems or information systems are sometimes used for transactions they were not implemented for, but which they are capable of conducting. Zureck et al. (1999) note that data warehouses are not orthogonal as the literature implies.
Software testing and measurement are not separate engineering process facets; they interconnect within the lifecycle of a system's development. The testing lifecycle process is not just a process of debugging or identifying defects within a program. The same can be said of change management, which is one of the fundamental processes of a project's management dichotomy. This is because change management provides the means for developers and project managers to document the change requests for each development artefact within a software development lifecycle.
However, it is imperative that one is comfortable and well adept with the testing lifecycle, which in its entirety cannot show the absence of defects: it can only show that software defects are present.
Agent-based modelling provides a vital scientific understanding of differing systems and an additional framework, with sophisticated innovative paradigms, which allows the modelling of dynamic systems as well as natural systems. The specific objectives of this analysis and investigation were to take an existing FLAME model and use it for virtual experiments that would enable the scientific analysis of keratinocyte colony formation, by increasing the number of agents and altering key parameters such as cell cycle length and probability. Furthermore, one of the main objectives was to determine the keratinocyte colony formation capacity of in vitro and in virtuo models for normal human keratinocytes (NHK); this study also explores the existing model to contrast the effectiveness of extracellular calcium levels, analyse cell seeding densities and see how cell culture is enhanced. We'll aim to analyse the negative and positive outputs of keratinocyte colony formation capacity due to changes in the cell cycle length of stem cells during keratinocyte colony formation and differentiation, with the deliberate purpose of broadening one's understanding and insight into how changes to the latter and former help human tissue healing in various demographics.
This paper explores and presents a novel scientific paradigm on the ethics, methodologies and dichotomy of autonomous military robot systems used to advance and dynamically change how warfare in the twenty-first century is conducted and judged from an ethical, moral and legal perspective, with the aim of creating a new concept through a scientific survey.
Background: There has been a massive generation of biological data due to the high complexity of genome and post-genomic technologies. Through the implementation of a biological model, one is able to extrapolate and understand the different biological systems within reach. One of the most advanced paradigms that has evolved and been adopted by the scientific community within this field is the use of scientific workflow management systems (WMSs) such as Taverna, together with additional sophisticated tools like R and Bioconductor packages for biological data analysis. Many web-based analysis services have been created for biological data analysis, but a web interface that incorporates Taverna with R/Bioconductor is still missing.
Results: We have designed Darwin-BioWeb, a web-based portal for biological data analysis. Its architecture consists of four main components: the Workflow Manager, the Taverna Server, the R Server and the Graphical User Interface. The Workflow Manager provides services for workflow selection, monitoring the current status of a workflow and fetching the results, using two portlets. The R Server is used to perform back-end computation using Bioconductor. The Taverna Server executes submitted workflows remotely. The User Interface allows the selection and execution of workflows as well as the uploading of input data, and results can be viewed or saved explicitly on users' computers. Users are able to perform microarray analysis through a predefined workflow which includes methods for normalisation, differential expression, clustering and visualisation.
Conclusions: We have implemented a portal for biological data analysis called Darwin-BioWeb. The system supports several methods, such as normalisation, differential expression, clustering and visualisation, through a pipeline. These methods allow biologists to simulate specific experiments without prior knowledge of informatics and statistics.
There has been a massive generation of biological data due to the sophistication and structures of genome and post-genomic technologies. Through the implementation of a biological model, one is able to extrapolate and understand the different biological systems within reach. One of the most advanced paradigms that has evolved and been adopted by the scientific community within this field is the use of scientific workflow management systems (WMSs) such as Taverna, together with additional sophisticated tools like R and Bioconductor packages for biological data analysis. The concept of scientific workflows is changing the way specific in silico experiments are conducted, due to the sophistication with which they simulate various types of manipulation and data analysis. The present computational analysis of various complex biological data through analytic methods, and their manipulation, has become time-consuming and prone to introducing analytical errors, owing to advances in genomic and post-genomic technologies. To address this problem, an automatic generation of a web portal to assimilate and process the adequate data, and display it through a web interface efficiently and accurately, is required.
Thesis Chapters by Fernando Camutari MSc/EngTech/CEng/MBCS/MIEEE
AVA E-COMMERCE: ONLINE JOURNAL IMPLEMENTATION, 2011
The imperative objective of this e-commerce project was to develop a Java-based open access e-commerce journal website. The e-journal is freely accessible to any user to aid the submission, dissemination and distribution of results. However, the administrative process and maintenance of the e-journal follow a model that allows potential authors to take on the burden of running the website in as productive a way as they see fit.
Based on the policy of the Department of Computer Science at the University of Sheffield, various analyses related to freely available open source software applications were conducted to aid the implementation and deployment of the e-journal website.
Talks by Fernando Camutari MSc/EngTech/CEng/MBCS/MIEEE
The innovation of software systems through the engineering of loosely coupled design and the architecture of new frameworks is now a fundamental activity within software engineering. This is shown by the rapid development of new imperative capabilities, which are constantly changing. The socioeconomic sphere (what people perceive, believe, expect, want and earn) has been impacted by this innovative shift within the industry. In addition, different types of innovation paradigms are being used to continue industry innovation in various areas, such as the automotive and pharmaceutical industries. This informative article aims to inform and present a new dimension to the way innovation is looked at and explored within the realm of software systems engineering.
Real Time Database Systems, 2020
As has been noted, the data of the Internet of Things is twenty times the size of the planet Earth, and the latest calculations infer that this amount will increase beyond the present quantity. Data is one of the ingredients used to provide information (as a finished product). However, specific steps are taken to convert it into an understandable information product, such as acquisition, storage, manipulation, retrieval and distribution of the finished product. Within such a process exist specific challenges: how the data is captured, stored, searched, transferred, analysed, and lastly visualised or made available to the end user. This trajectory has a trend specific to larger data sets derived from the analysis of data. The end aim of this research is to provide readers within specialised spheres with, and motivate them to gain, a greater appreciation of the various types of database systems, their capabilities, as well as the advances that have taken place to date with regard to the problem of big data.