ICTPRG533 Student Assessment


ASSESSMENT COVER PAGE

o STUDENT DETAILS / DECLARATION:

Course Name:

Unit / Subject Name: ICTPRG533 - Manage data persistence using NoSQL data stores


Trainer’s Name: Assessment No: Task 1, Task 2 & Task 3

I declare that:

o I fully understand the context and purpose of this assessment.
o I am fully aware of the competency standard/criteria against which I will be assessed.
o I have been given fair notice of the date, time and venue for the assessment.
o I am aware of the resources I need and how the assessment will be conducted.
o I have had the appeals process and confidentiality explained to me.
o I agree that I am ready to be assessed and that all written work is my own.

This assessment is my:

o First submission o Re-submission (Attempt )


Student Name: Student ID:
Student’s Signature: Submission Date: / /

ASSESSOR USE ONLY:


Result:

Assessment Task 1: o Satisfactory o Not Satisfactory

Assessment Task 2: o Satisfactory o Not Satisfactory

Assessment Task 3: o Satisfactory o Not Satisfactory

Final Assessment Result for this unit: C / NYC

Feedback: Feedback is given to the student on each assessment task and the final outcome of the unit: Yes / No

Assessor’s Feedback:

Assessor’s Signature: Date: / /

ASSESSMENT FIRST SUBMISSION/RE-SUBMISSION RECEIPT:


It is the student’s responsibility to keep the assessment submission receipt as proof of submission of assessment tasks.
Student Name: Student ID:

Unit / Subject Code: Assessment No:

Trainer Name: Date: / /

Signature:

TYPES OF EVIDENCE
The RTO ensures that assessment is carried out in accordance with the requirements of the unit and the standards
and will implement an assessment process which identifies the evidence required for each unit of competency. They
will identify the type of evidence and the assessment methods used.
Types of evidence include:
 Direct Evidence – things that the assessor observes first-hand, e.g., observation or work samples

 Indirect Evidence – things that someone else has observed and reported to us, e.g., third party reports

 Supplementary Evidence – other things that can indicate performance, such as training records, questions,
written work, portfolios

Assessment methods may include but are not limited to:


 Written Activity

 Case Study

 Observation/Demonstration

 Practical Activity

 Questions

 Third Party Report

Assessment must comply with the assessment methods of the training package and be conducted in accordance with
the Principles of Assessment and assessment conditions. This means the assessment must be fair, flexible, reliable
and valid.

ASSESSMENT INFORMATION FOR
STUDENTS
Throughout your training we are committed to your learning by providing a training and assessment framework that
ensures the knowledge gained through training is translated into practical on the job improvements.

You are going to be assessed for:


 Your skills and knowledge using written and observation activities that apply to the workplace.

 Your ability to apply your learning.

 Your ability to recognise common principles and actively use these on the job.

All of your assessment and training is provided as a positive learning tool. Your assessor will guide your learning and
provide feedback on your responses to the assessment materials until you have been deemed competent in this unit.
1.1 HOW YOU WILL BE ASSESSED
The process we follow is known as competency-based assessment. This means that evidence of your current skills
and knowledge will be measured against national standards of best practice, not against the learning you have
undertaken either recently or in the past. Some of the assessment will be concerned with how you apply your skills
and knowledge in the workplace, and some in the training room as required by each unit.
The assessment tasks have been designed to enable you to demonstrate the required skills and knowledge and
produce the critical evidence to successfully demonstrate competency at the required standard.
Your assessor will ensure that you are ready for assessment and will explain the assessment process. Your
assessment tasks will outline the evidence to be collected and how it will be collected, for example; a written activity,
case study, or demonstration and observation.
The assessor will also have determined if you have any special needs to be considered during assessment. Changes
can be made to the way assessment is undertaken to account for special needs and this is called making Reasonable
Adjustment.

What happens if your result is ‘Not Yet Competent’ for one or more assessment tasks?
Our assessment process is designed to answer the question “has the desired learning outcome been achieved yet?” If
the answer is “Not yet”, then we work with you to see how we can get there.
In the case that one or more of your assessments has been marked ‘NYC’, your trainer will provide you with the
necessary feedback and guidance, in order for you to resubmit your responses.

What if you disagree on the assessment outcome?
You can appeal against a decision made in regards to your assessment. An appeal should only be made if you have
been assessed as ‘Not Yet Competent’ against a specific unit and you feel you have sufficient grounds to believe that
you are entitled to be assessed as competent. You must be able to adequately demonstrate that you have the skills
and experience to be able to meet the requirements of units you are appealing the assessment of.
Your trainer will outline the appeals process, which is available to the student. You can request a form to make an
appeal and submit it to your trainer, the course coordinator, or the administration officer. The RTO will examine the
appeal and you will be advised of the outcome within 14 days. Any additional information you wish to provide may
be attached to the appeal form.

What if I believe I am already competent before training?


If you believe you already have the knowledge and skills to be able to demonstrate competence in this unit, speak
with your trainer, as you may be able to apply for Recognition of Prior Learning (RPL).

Assessor Responsibilities
Assessors need to be aware of their responsibilities and carry them out appropriately. To do this they need to:
 Ensure that participants are assessed fairly based on the outcome of the language, literacy and numeracy
review completed at enrolment.

 Ensure that all documentation is signed by the student, trainer, workplace supervisor and assessor when
units and certificates are complete, to ensure that there is no follow-up required from an administration
perspective.

 Ensure that their own qualifications are current.

 When required, request the manager or supervisor to determine that the student is ‘satisfactorily’
demonstrating the requirements for each unit. ‘Satisfactorily’ means consistently meeting the standard
expected from an experienced operator.

 When required, ensure supervisors and students sign off on third party assessment forms or third party
report.

 Follow the recommendations from moderation and validation meetings.

How should I format my assessments?


Your assessments should be typed in an 11 or 12 point font for ease of reading. You must include a footer on each page
with the student name, unit code and date. Your assessment needs to be submitted as a hardcopy or electronic copy
as requested by your trainer.

How long should my answers be?
The length of your answers will be guided by the description in each assessment, for example:
Type of Answer    Answer Guidelines

Short Answer      4 typed lines = 50 words, or
                  5 lines of handwritten text

Long Answer       8 typed lines = 100 words, or
                  10 lines of handwritten text = 1/3 of a foolscap page

Brief Report      500 words = 1 page typed report, or
                  50 lines of handwritten text = 1 1/2 foolscap handwritten pages

Mid Report        1,000 words = 2 page typed report
                  100 lines of handwritten text = 3 foolscap handwritten pages

Long Report       2,000 words = 4 page typed report
                  200 lines of handwritten text = 6 foolscap handwritten pages

How should I reference the sources of information I use in my assessments?


Include a reference list at the end of your work on a separate page. You should reference the sources you have used
in your assessments in the Harvard Style. For example:
For a website: Website Name – Page or Document Name, Retrieved [insert the date]. Webpage link.

For a book: Author surname, author initial Year of publication, Title of book, Publisher, City, State

Assessment guide
The following table shows you how to achieve a satisfactory result against the criteria for each type of assessment
task. The following is a list of general assessment methods that can be used in assessing a unit of competency. Check
your assessment tasks to identify the ones used in this unit of competency.

You will receive an overall result of Competent or Not Yet Competent for the unit. The assessment process is
made up of a number of assessment methods. You are required to achieve a satisfactory result in each of these
to be deemed competent overall. Your assessment may include the following assessment types.

Questions
Satisfactory Result: All questions answered correctly. Answers address the question in full, referring to
appropriate sources from your workbook and/or workplace.
Non-Satisfactory Result: Incorrect answers for one or more questions. Answers do not address the question in
full. Does not refer to appropriate or correct sources.

Third Party Report
Satisfactory Result: Supervisor or manager observes work performance and confirms that you consistently meet
the standards expected from an experienced operator.
Non-Satisfactory Result: Could not demonstrate consistency. Could not demonstrate the ability to achieve the
required standard.

Written Activity
Satisfactory Result: The assessor will mark the activity against the detailed guidelines/instructions.
Attachments, if requested, are attached. All requirements of the written activity are addressed/covered.
Responses must refer to appropriate sources from your workbook and/or workplace.
Non-Satisfactory Result: Does not follow guidelines/instructions. Requested supplementary items are not
attached. Response does not address the requirements in full; is missing a response for one or more areas.
One or more of the requirements are answered incorrectly. Does not refer to or utilize appropriate or correct
sources of information.

Observation/Demonstration
Satisfactory Result: All elements, criteria, knowledge and performance evidence and critical aspects of
evidence are demonstrated at the appropriate AQF level.
Non-Satisfactory Result: Could not demonstrate elements, criteria, knowledge and performance evidence and/or
critical aspects of evidence at the appropriate AQF level.

Case Study
Satisfactory Result: All comprehension questions answered correctly, demonstrating an application of knowledge
of the topic case study. Answers address the question in full, referring to appropriate sources from your
workbook and/or workplace.
Non-Satisfactory Result: Lack of demonstrated comprehension of the underpinning knowledge required to complete
the case study questions correctly. One or more questions are answered incorrectly. Answers do not address the
question in full; do not refer to appropriate sources.

Practical Activity
Satisfactory Result: All tasks in the practical activity must be completed and evidence of completion must be
provided to your trainer/assessor. All tasks have been completed accurately and evidence provided for each
stated task. Attachments, if requested, are attached.
Non-Satisfactory Result: Tasks have not been completed effectively and evidence of completion has not been
provided. Requested supplementary items are not attached.
1. Assessment Plan
The student must be assessed as satisfactory in each of the following assessment methods in order
to demonstrate competence in a variety of ways.
Evidence number/Task number: Assessment task 1
Assessment method/Task name: Knowledge Test (KT)
Sufficient evidence recorded/Outcome: S / NS (First Attempt); S / NS (Second Attempt)

Evidence number/Task number: Assessment task 2
Assessment method/Task name: Skills Test (ST)
Sufficient evidence recorded/Outcome: S / NS (First Attempt); S / NS (Second Attempt)

Outcome: C / NYC    Date assessed:    Trainer signature:

2. Completion of the Assessment Plan

Your trainer is required to fill out the Assessment Plan Outcome records above, when:

• You have completed and submitted all the requirements for the assessment tasks for this
cluster or unit of competency.
• Your work has been reviewed and assessed by your trainer/assessor.
• You have been assessed as either satisfactory or unsatisfactory for each assessment task
within the unit of competency.
• You have been provided with relevant and detailed feedback.

Every assessment has a “Feedback to Student” section used to record the following information. Your
trainer/assessor must also ensure that all sections are filled in appropriately, such as:

• Result of Assessment (satisfactory or unsatisfactory)


• Student name, signature and date
• Assessor name, signature and date
• Relevant and detailed feedback

3. Unit Requirements
You, the student, must read and understand all of the information in the Unit Requirements before
completing the Student Pack. If you have any questions regarding the information, see your
trainer/assessor for further information and clarification.

Pre-Assessment Checklist: Task 1 - Knowledge Test
The purpose of this checklist
The pre-assessment checklist helps students determine if they are ready for assessment. The
trainer/assessor must review the checklist with the student before the student attempts the
assessment task. If any items of the checklist are incomplete or not clear to the student, the
trainer/assessor must provide relevant information to the student to ensure they understand the
requirements of the assessment task. The student must ensure they are ready for the assessment
task before undertaking it.
Section 1: Information for Students
Make sure you have completed the necessary prior learning before attempting this assessment.
Make sure your trainer/assessor clearly explained the assessment process and tasks to be
completed.
Make sure you understand what evidence is required to be collected and how.
Make sure you know your rights and the Complaints and Appeal process.
Make sure you discuss any special needs or reasonable adjustments to be considered during the
assessment (refer to the Reasonable Adjustments Strategy Matrix - Appendix A and negotiate
these with your trainer/assessor).
Make sure that you have access to a computer and the internet (if you prefer to type the
answers).
Make sure that you have all the required resources needed to complete this assessment task.
The due date of this assessment task is in accordance with your timetable.
In exceptional (compelling and compassionate) circumstances, an extension to submit an
assessment can be granted by the trainer/assessor. Evidence of the compelling and
compassionate circumstances must be provided together with your request for an extension to
submit your assessment work.
The request for an extension to submit your assessment work must be made before the due
date.
Section 2: Reasonable adjustments
I confirm that I have reviewed the Reasonable Adjustments guidelines and criteria as
provided in Appendix A and attached relevant evidence as required and select the correct
checkbox.
I do require reasonable adjustment
I do not require reasonable adjustment
Declaration (Student to complete)
I confirm that the purpose and procedure of this assessment task has been clearly explained to
me.
I confirm that I have been consulted about any special needs I might have in relation to the
assessment process.
I confirm that the criteria used for this assessment has been discussed with me, as have the
consequences and possible outcomes of this assessment.
I confirm I have accessed and understand the assessment information as provided in the
Training Organisation’s Student Handbook.
I confirm I have been given fair notice of the date, time, venue and/or other arrangements for
this assessment.
I confirm that I am ready for assessment.

Student Name: ______________________________________

Student Signature: ___________________________________

Assessment method-based instructions and guidelines: Knowledge Test


Assessment type

 Written Questions

Instructions provided to the student:

Assessment task description:

 This is the first (1) assessment task you must successfully complete to be deemed
competent in this unit of competency.
 The Knowledge Test comprises twelve (12) written questions.
 You must respond to all questions and submit them to your Trainer/Assessor.
 You must answer all questions to the required level, e.g. provide an answer within the
required word limit, to be deemed satisfactory in this task
 You will receive your feedback within two (2) weeks, and you will be notified by your
Trainer/Assessor when your results are available.

Applicable conditions:

 All knowledge tests are untimed and are conducted as open book assessment (this means
you can refer to your textbook during the test).
 You must read and respond to all questions.
 You may handwrite/use a computer to answer the questions.
 You must complete the task independently.
 No marks or grades are allocated for this assessment task. The outcome of the task will be
Satisfactory or Not Satisfactory.
 As you complete this assessment task, you are predominately demonstrating your written
skills and knowledge to your trainer/assessor.

Resubmissions and reattempts:

 Where a student’s answers are deemed not satisfactory after the first attempt, a
resubmission attempt will be allowed.
 The student may speak to their trainer/assessor if they have any difficulty in completing
this task and require reasonable adjustments.
 For more information, please refer to the Training Organisation’s Student Handbook.

Location:

 This assessment task may be completed in:

✘ a classroom
learning management system (i.e. Moodle),
workplace,
or an independent learning environment.

 Your trainer/assessor will provide you with further information regarding the location for
completing this assessment task.

Instructions for answering the written questions:

 Complete a written assessment consisting of a series of questions.


 You will be required to answer all the questions correctly.
 Do not start answering questions without understanding what is required. Read the
questions carefully and critically analyse them for a few seconds; this will help you to
identify what information is needed in the answer.
 Your answers must demonstrate an understanding and application of the relevant concepts
and critical thinking.
 Be concise, to the point and write answers within the word-limit given to each question. Do
not provide irrelevant information. Remember, quantity is not quality.
 You must write your responses in your own words.
 Use non-discriminatory language. The language used should not devalue, demean, or
exclude individuals or groups based on attributes such as gender, disability, culture, race,
religion, sexual preference or age. Gender-inclusive language should be used.
 When you quote, paraphrase, summarise or copy information from other sources to write
your answers or research your work, always acknowledge the source.

Purpose of the assessment

This assessment task is designed to evaluate the student’s knowledge essential to managing data persistence
using NoSQL data stores in a range of contexts and industry settings, and knowledge regarding the following:

 Knowledge of benefits and functions of noSQL database and schema free data persistence,
as well as traditional relational data models
 Knowledge of methods and different features and functions between scaling out and scaling
up (horizontal and vertical)
 Knowledge of language used in required programming language for noSQL applications
 Knowledge of partitioning in a noSQL environment and its related terms
 Knowledge of functions and features for time-to-live (TTL) requirements
 Knowledge of authorisation and authentications procedures and levels of responsibility
according to client access requirements
 Knowledge of distribution of data storage across partitions
 Knowledge of debugging and testing methodologies and techniques
 Knowledge of functions and features of sort keys in noSQL storage
 Knowledge of features of transport encryptions, authentication and authorisation
 Knowledge of different noSQL data store formats, including:
o key value
o document based
o column based
o graph based

 Knowledge of different noSQL data types, including:
o numeric
o string
o boolean
o complex
o date time.

Task instructions
 This is an individual assessment.
 To ensure your responses are satisfactory, consult a range of learning resources and other
information such as handouts, textbooks, learner resources etc.
 To be assessed as Satisfactory in this assessment task, all questions must be answered
correctly.

Assessment Task 1: Knowledge Test

Provide your response to each question in the box below.

Q1: Answer the following questions regarding benefits and functions of NoSQL database and schema-free data
persistence, as well as traditional relational data models:
(Satisfactory response: Yes / No)

1.1. Explain the functions and benefits of the NoSQL database. Write your answer in 200-250 words.

1.2. Explain the benefits and functions of schema-free data persistence. Write your answer in 200-250 words.

1.3. Explain the benefits and functions of traditional relational data models. Write your answer in 200-250 words.

Ans 1.1: NoSQL databases, or "Not Only SQL" databases, are a category of database management systems that diverge
from traditional relational databases in terms of data storage and retrieval. The functions and benefits of NoSQL
databases can be elucidated as follows:

1. **Scalability**: NoSQL databases are designed to handle large volumes of unstructured or semi-structured
data, making them highly scalable. They excel in distributed architectures, allowing for easy horizontal
scaling by adding more servers to the database cluster.

2. **Flexibility and Schema-less Design**: Unlike traditional relational databases with a predefined schema,
NoSQL databases embrace a flexible, schema-less approach. This flexibility enables developers to adapt to
evolving data requirements without the need for significant alterations to the database structure.

3. **High Performance**: NoSQL databases are often optimized for specific use cases, leading to enhanced
performance in certain scenarios. They can efficiently handle tasks like real-time analytics, content
management, and other applications that demand rapid data access.

4. **Support for Unstructured Data**: NoSQL databases are adept at managing unstructured or
semi-structured data, such as JSON or XML documents. This makes them suitable for
applications dealing with diverse data types like social media posts, user- generated
content, and sensor data.

5. **Ease of Development**: NoSQL databases are generally more developer-friendly, offering easier integration
with modern programming languages and frameworks. Their lack of a rigid schema simplifies the development
process and accelerates iteration cycles.

6. **Cost-Efficiency**: NoSQL databases can be more cost-effective for certain applications, particularly those
with massive amounts of data and dynamic scaling requirements. Their ability to run on commodity hardware and
in distributed environments contributes to cost savings.

In conclusion, NoSQL databases provide a range of benefits, including scalability, flexibility, high
performance, support for unstructured data, ease of development, and cost-efficiency. These features make them
well-suited for modern applications with diverse and evolving data needs.

References:

- Stonebraker, M., & Cattell, R. (2011). 10 Rules for Scalable Performance in Simple Operation NoSQL Data Stores.
ACM SIGMOD Record, 40(2), 10–18. doi: 10.1145/1978915.1978919.

- Sadalage, P. J., & Fowler, M. (2012). NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot
Persistence. Addison-Wesley.

Ans 1.2: Schema-free data persistence, a characteristic often associated with NoSQL databases, provides several
benefits and functions that cater to the needs of modern, dynamic applications.

1. **Flexibility and Agility**: One of the primary advantages of schema-free data persistence is the
flexibility it offers in handling diverse and evolving data structures. Unlike traditional relational databases
with a rigid schema, schema-free databases allow developers to insert, modify, or delete fields without
requiring predefined structures. This agility is crucial for applications with rapidly changing data
requirements.

2. **Easy Data Evolution**: Schema-free data persistence enables seamless evolution of data models over time.
As application requirements change or new features are introduced, developers can adapt the data model without
the need for complex migration processes or downtime. This is particularly advantageous in dynamic and
iterative development environments.

3. **Simplified Development Process**: With schema-free data persistence, developers can focus more on
application logic and less on database schema design. This simplification accelerates the development process,
allowing for quicker iterations and easier collaboration between development and operations teams.

4. **Improved Performance**: The absence of a fixed schema can contribute to improved query performance.
Schema-free databases often store data in a way that is optimized for retrieval, reducing the need for complex
joins and enabling faster data access, especially in scenarios where the data structure is hierarchical or
nested.

5. **Support for Unstructured Data**: Schema-free databases excel in managing unstructured or


semi-structured data, such as JSON or XML documents. This makes them suitable for applications
dealing with diverse data types like social media content, sensor data, or other forms of user-
generated content.

In conclusion, schema-free data persistence provides benefits such as flexibility, easy data evolution, simplified
development, improved performance, and support for unstructured data. These advantages make it an ideal
choice for applications where adaptability, speed, and efficient handling of diverse data are paramount.

Reference:

- Sadalage, P. J., & Fowler, M. (2012). NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot
Persistence. Addison-Wesley.
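
To make the schema-free idea concrete, the short Python sketch below stores documents with different fields in
the same in-memory "collection" and then introduces a new field without any migration step. The collection,
field names and helper function are illustrative assumptions only, not the API of any particular NoSQL product.

```python
# Minimal sketch of schema-free persistence: documents in one collection
# may carry different fields, and new fields can appear at any time
# without altering a predefined schema. All names here are illustrative.

customer_collection = []  # stands in for a document collection

def save(document: dict) -> None:
    """Persist a document as-is; no schema is enforced."""
    customer_collection.append(document)

# Two documents with different shapes coexist in the same collection.
save({"name": "Asha", "email": "asha@example.com"})
save({"name": "Ben", "email": "ben@example.com", "loyalty_points": 120})

# A new field ("preferences") is introduced later with no migration.
save({"name": "Chen", "email": "chen@example.com",
      "preferences": {"newsletter": True, "language": "en"}})

# Queries simply tolerate missing fields.
with_points = [d for d in customer_collection if "loyalty_points" in d]
print(len(customer_collection), "documents stored;", len(with_points), "have loyalty_points")
```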

Ans 1.3:Traditional relational data models offer several benefits and functions that have made them foundational in
database management systems (RDBMS). These include:

1. **Data Integrity and Consistency**: Relational databases enforce data integrity through constraints like
primary keys and foreign keys, ensuring the consistency of data. This is essential for maintaining accurate and
reliable information within the database (Date, 2004).

2. **ACID Properties for Transactions**: The ACID properties (Atomicity, Consistency, Isolation, Durability)
provide a robust framework for transaction management, ensuring that database transactions are reliable and
maintain the integrity of the data (Gray & Reuter, 1993).

3. **Structured Query Language (SQL)**: The standardized query language SQL is a powerful tool for
interacting with relational databases. It allows for efficient data manipulation, retrieval, and management,
offering a common interface for users (Date, 2011).

4. **Complex Query Support and Joins**: Relational databases excel in handling complex queries and joining
tables, facilitating sophisticated data analysis and reporting (Codd, 1970).

5. **Normalization for Efficiency**: Normalization techniques are employed to organize data efficiently,
minimizing redundancy and optimizing storage structures (Date, 2004).

6. **Mature Ecosystem and Tooling**: The relational database ecosystem has matured over decades, providing a
wide array of tools and frameworks for tasks like data modeling, backup, recovery, security, and monitoring
(O'Brien & Marakas, 2010).

7. **Data Security and Access Control**: RDBMS offer robust security features, including access control
mechanisms that allow administrators to define user roles and permissions, ensuring data security (Ramakrishnan
& Gehrke, 2003).

These benefits make traditional relational data models well-suited for applications where data consistency,
integrity, and relational structure are critical.

References:

- Codd, E. F. (1970). A Relational Model of Data for Large Shared Data Banks. Communications of the ACM,
13(6), 377–387.

- Date, C. J. (2004). An Introduction to Database Systems. Addison-Wesley.

- Date, C. J. (2011). SQL and Relational Theory: How to Write Accurate SQL Code. O'Reilly Media.

- Gray, J., & Reuter, A. (1993). Transaction Processing: Concepts and Techniques. Morgan Kaufmann.

- O'Brien, J. A., & Marakas, G. M. (2010). Management Information Systems. McGraw-Hill/Irwin.

- Ramakrishnan, R., & Gehrke, J. (2003). Database Management Systems. McGraw-Hill.

Q2: Answer the following questions regarding methods and different features and functions between scaling out
and scaling up (horizontal and vertical):
(Satisfactory response: Yes / No)

2.1. Explain methods and different features and functions between scaling out and scaling up (horizontal and
vertical). Write your answer in 300-350 words.

Ans 2.1: Scaling out (horizontal scaling) and scaling up (vertical scaling) are two distinct approaches to
addressing the increasing demands on a system's capacity, and they involve different methods, features, and
functions.

1. **Scaling Out (Horizontal Scaling):**

- **Method:** In horizontal scaling, the focus is on adding more hardware resources, such as servers or nodes, to
distribute the load across multiple machines. This approach is often associated with the use of clusters and distributed
architectures.

- **Features and Functions:**

- **High Availability:** Horizontal scaling enhances system reliability by distributing workloads across multiple
servers. If one server fails, others can take over, ensuring continuous service availability.

- **Elasticity:** Horizontal scaling allows for dynamic adjustments to the number of servers based on traffic
patterns. This elasticity enables efficient resource utilization and cost optimization during varying
workloads.

- **Linear Scalability:** As more resources are added, the system's capacity increases linearly. This means
that performance scales proportionally with the addition of each new node or server.

- **Distributed Databases:** Horizontal scaling is often associated with NoSQL databases and distributed storage
systems that can easily distribute data across multiple nodes.

2. **Scaling Up (Vertical Scaling):**

- **Method:** Vertical scaling involves increasing the capacity of a single machine by adding more resources
such as CPU, RAM, or storage. This is typically done by upgrading the existing hardware components.

- **Features and Functions:**

- **Single-System Management:** Managing a single, larger server is generally simpler than dealing with a
distributed system. This can lead to easier maintenance and troubleshooting.

- **Consolidation of Resources:** Vertical scaling is well-suited for applications that require substantial
resources on a single machine, such as databases that benefit from having large amounts of RAM.

- **Quick Implementation:** Upgrading the resources of a single machine can be a quicker solution compared to
adding new servers and configuring a distributed environment.

- **Compatibility:** Some legacy applications may be designed to work more efficiently on a single, powerful
machine, making vertical scaling a more compatible choice.

References:

- Barroso, L. A., Clidaras, J., & Hoelzle, U. (2013). The Datacenter as a Computer: An Introduction to the
Design of Warehouse-Scale Machines. Morgan & Claypool Publishers.

- Tanenbaum, A. S., & Van Steen, M. (2007). Distributed Systems: Principles and Paradigms. Prentice Hall.

- Leavitt, N. (2009). Will NoSQL Databases Live Up to Their Promise? Computer, 42(2), 12-14.
doi:10.1109/MC.2009.51
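
As a rough illustration of the difference, the Python sketch below contrasts the two approaches: horizontal
scaling is modelled as routing keys across a growing list of nodes, while vertical scaling is modelled as
raising the capacity of one node. Node names and capacity figures are invented for the example.

```python
# Illustrative sketch only: contrasting horizontal and vertical scaling.
# Node names and capacities are made-up values, not real infrastructure.

def route_to_node(key: str, nodes: list) -> str:
    """Horizontal scaling: spread keys across however many nodes exist."""
    return nodes[hash(key) % len(nodes)]

nodes = ["node-1", "node-2"]
print(route_to_node("user:42", nodes))      # served by one of two nodes

nodes.append("node-3")                      # scale OUT: add a server
print(route_to_node("user:42", nodes))      # the same key may now map elsewhere

# Vertical scaling: one machine, bigger resources.
single_server = {"cpu_cores": 8, "ram_gb": 64}
single_server.update(cpu_cores=16, ram_gb=256)   # scale UP: upgrade in place
print(single_server)
```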

Q3: Answer the following questions regarding the language used in the required programming language for NoSQL
applications:
(Satisfactory response: Yes / No)

3.1. Describe the language used in the required programming language for NoSQL applications. Write your answer
in 200-250 words.

Ans 3.1: The language used in NoSQL databases for programming applications varies depending on the specific
database system. NoSQL databases are designed to be flexible, accommodating a variety of data models, and hence
they support different programming languages. Some of the common languages used in NoSQL applications are:

1. **JavaScript (Node.js):** Many NoSQL databases, particularly those in the document-oriented category like
MongoDB, use JavaScript as the primary language for interacting with the database. Node.js, a server-side
JavaScript runtime, is often employed to build scalable and efficient applications that communicate with NoSQL
databases.

2. **Python:** Python is widely adopted in the NoSQL ecosystem. For instance, MongoDB provides a Python driver
that allows developers to interact with the database using Python. Python's simplicity and readability make it
a popular choice for working with NoSQL databases, especially in scenarios involving data analysis and
manipulation.

3. **Java:** Java is a versatile programming language commonly used in enterprise environments, and many NoSQL
databases offer Java APIs or drivers. Apache Cassandra, a wide-column store NoSQL database, has Java as its
primary language for client applications.

4. **Ruby:** Ruby is favored in certain NoSQL ecosystems, such as the use of Ruby on Rails with
document-oriented databases like CouchDB. Ruby's elegant syntax and ease of use make it suitable for web
development applications.

5. **C# (C-Sharp):** Microsoft's .NET framework and C# are utilized in conjunction with NoSQL databases like
Azure Cosmos DB. C# drivers are available for connecting and interacting with these databases, making it a choice
for developers within the Microsoft ecosystem.

It's important to note that the language used for programming NoSQL applications is often determined by the
specific database and the corresponding drivers or APIs provided by the database vendors. Developers can choose
the language that aligns with their expertise and the requirements of their application.

References:

- MongoDB. (n.d.). MongoDB Drivers. Retrieved from https://docs.mongodb.com/drivers/

- DataStax. (n.d.). Java Driver for Apache Cassandra. Retrieved from
https://docs.datastax.com/en/developer/java-driver/latest/

- Apache CouchDB. (n.d.). Getting Started with Ruby. Retrieved from
https://docs.couchdb.org/en/stable/getting-started/try-ruby.html

- Microsoft. (n.d.). Azure Cosmos DB .NET SDK. Retrieved from
https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-sdk-dotnet
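
As a small, hedged illustration of the Python option described above, the sketch below uses the PyMongo driver
to insert and query a document. The connection string, database and collection names are placeholders; the
snippet assumes a reachable MongoDB server and an installed PyMongo package.

```python
# Illustrative sketch using the PyMongo driver (pip install pymongo).
# Connection string, database and collection names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local server
db = client["demo_db"]                              # hypothetical database
products = db["products"]                           # hypothetical collection

# Insert a document; no table definition or schema migration is needed.
products.insert_one({"sku": "A-100", "name": "Sensor", "tags": ["iot", "new"]})

# Query by field value, as the answer describes for Python drivers.
for doc in products.find({"tags": "iot"}):
    print(doc["sku"], doc["name"])
```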

Q4: Answer the following questions regarding partitioning in a NoSQL environment and its related terms:
(Satisfactory response: Yes / No)

4.1. Describe the below partitioning strategies in a NoSQL environment and their related terms.

• Vertical partitioning

• Horizontal partitioning (“sharding”)

• Functional partitioning

Write your answer in 200-250 words.

Ans 4.1: In a NoSQL environment, partitioning strategies are crucial for distributing and managing large datasets
efficiently. Here are descriptions of the mentioned partitioning strategies:

1. **Vertical Partitioning:**

- **Description:** Vertical partitioning involves dividing a dataset based on columns or attributes. Different columns
of a table are stored separately, often on different nodes or servers. This strategy is useful when certain columns are
accessed more frequently than others, allowing for better resource utilization and improved query performance.

- **Related Terms:** Also known as columnar partitioning, this strategy contrasts with horizontal partitioning by
dividing data based on attributes rather than rows.

2. **Horizontal Partitioning ("Sharding"):**

- **Description:** Horizontal partitioning, commonly referred to as "sharding," involves splitting a dataset
into smaller chunks based on rows. Each partition, or shard, is stored on a separate node or server. This
strategy is particularly effective for distributing read and write workloads across multiple servers, enhancing
scalability and performance.

- **Related Terms:** Sharding is a form of data partitioning where each shard is a self-contained subset of the
data. It helps in parallelizing queries and spreading the data storage and processing load.

3. **Functional Partitioning:**

- **Description:** Functional partitioning involves organizing data based on the functionality or access
patterns of the application. This strategy aims to group together data that is frequently accessed together,
optimizing the system for specific use cases. For example, time-based functional partitioning may involve
storing data for each month or year separately.

- **Related Terms:** This concept is closely related to the idea of organizing data based on its function or
purpose within the application. It helps in tailoring the data storage and retrieval mechanisms to meet
specific application requirements.
References:

- Paton, N. W., & Haines, J. H. (1994). Vertical Partitioning Algorithms for Database Design. ACM Transactions
on Database Systems (TODS), 19(4), 651–677. doi:10.1145/191839.191853

- Hellerstein, J. M., & Stonebraker, M. (2004). Architecture of a Database System. Foundations and Trends® in
Databases, 1(2), 141–259. doi:10.1561/1900000002
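
The three strategies can also be sketched in a few lines of Python. The snippet below is a conceptual
illustration only; the record layout, shard count and "hot"/"cold" grouping are invented for the example and do
not correspond to any specific NoSQL product.

```python
# Conceptual sketch of the three partitioning strategies (invented data).
record = {"id": 17, "name": "Asha", "bio": "long text...", "last_login": "2024-01-05"}

# Vertical partitioning: split by columns/attributes into separate stores.
profile_core = {"id": record["id"], "name": record["name"]}
profile_extra = {"id": record["id"], "bio": record["bio"]}

# Horizontal partitioning (sharding): split by rows, routed by a key.
NUM_SHARDS = 4
shard_id = record["id"] % NUM_SHARDS          # this record goes to shard 1
shards = {i: [] for i in range(NUM_SHARDS)}
shards[shard_id].append(record)

# Functional partitioning: group data by how the application uses it,
# e.g. frequently-read login data kept apart from rarely-read bios.
hot_store = {"last_login": record["last_login"]}
cold_store = {"bio": record["bio"]}

print(shard_id, list(profile_core), list(hot_store))
```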

Q5: Answer the following questions regarding functions and features for time-to-live (TTL) requirements:
(Satisfactory response: Yes / No)

5.1. Explain functions for time-to-live (TTL) requirements. Write your answer in 100-150 words.

5.2. Explain features for time-to-live (TTL) requirements. Write your answer in 100-150 words.

Ans 5.1: Time-to-Live (TTL) in a database refers to a feature that sets a limit on the lifespan or validity
period of data. The primary functions of TTL requirements include:
1. **Data Expiry:** TTL allows data to automatically expire and be removed from the database after a specified
time period. This is beneficial for managing temporary or time-sensitive information, such as cache entries,
session data, or event logs.

2. **Resource Management:** By automatically removing outdated or irrelevant data, TTL helps in efficient
resource management. It prevents the accumulation of unnecessary data, optimizing storage space and system
performance.

3. **Cache Invalidation:** In caching systems, TTL is used to invalidate cached data after a certain time,
ensuring that users receive up-to-date information and preventing the serving of stale or obsolete content.

4. **Compliance and Privacy:** For compliance reasons or privacy concerns, TTL can be employed to enforce
data retention policies. It helps organizations adhere to regulations by automatically purging data that is no longer
required.

TTL requirements play a crucial role in maintaining data freshness, optimizing resource utilization, and
ensuring compliance with data management policies.

References:

• Redmond, E., & Wilson, J. R. (2012). Seven Databases in Seven Weeks: A Guide to Modern Databases and the
NoSQL Movement. Pragmatic Bookshelf.

Ans 5.2: Features for Time-to-Live (TTL) requirements in databases include:

1. **Configurable Expiry:** TTL features allow users to set and configure the expiry duration for each piece of
data, enabling fine-grained control over how long information should be retained in the database.

2. **Automated Data Deletion:** The primary function is the automatic removal of data once its TTL expires.
This feature helps maintain database hygiene by automatically cleaning up stale or obsolete records, reducing
storage overhead.

3. **Event-Driven Expiry:** Some systems provide event-driven TTL, allowing data to expire based on specific
events or triggers, ensuring that data removal aligns with the changing requirements of the application.

4. **Renewal or Extension:** Some databases support TTL renewal or extension, allowing users to update the TTL
of data if needed. This can be useful for scenarios where the lifespan of data needs to be extended dynamically.

5. **Secondary Index Expiry:** In systems with secondary indexes, TTL features may extend to automatically
removing associated index entries, ensuring comprehensive data cleanup.

These features collectively enhance the management of the data lifecycle, storage efficiency, and compliance
with data retention policies.

Reference:
- Apache Cassandra Documentation. (n.d.). Time Window Compaction Strategy. Retrieved from
http://cassandra.apache.org/doc/latest/operating/compaction/twcs.html
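
A minimal Python sketch of the TTL idea is shown below: each entry records its own expiry time and is dropped
on access once that time has passed. This is a conceptual illustration of configurable expiry and automated
deletion, not the internal mechanism of any particular NoSQL product.

```python
# Minimal TTL sketch: each key stores (value, expiry_timestamp).
# Conceptual only; real NoSQL stores expire data server-side.
import time

store = {}

def put(key, value, ttl_seconds):
    """Configurable expiry: each entry carries its own lifespan."""
    store[key] = (value, time.time() + ttl_seconds)

def get(key):
    """Automated deletion: expired entries are purged when touched."""
    if key not in store:
        return None
    value, expires_at = store[key]
    if time.time() >= expires_at:
        del store[key]          # data expiry in action
        return None
    return value

put("session:abc", "user-42", ttl_seconds=1)
print(get("session:abc"))   # "user-42" while still fresh
time.sleep(1.1)
print(get("session:abc"))   # None after the TTL has elapsed
```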

Q6: Answer the following questions regarding the authorisation and authentication procedures and levels of
responsibility according to client access requirements:
(Satisfactory response: Yes / No)

6.1. Explain authorisation and authentication procedures according to client access requirements. Write your
answer in 200-220 words.

6.2. Explain the below three (3) levels of responsibility for authorisation and authentication.

• Full authorisation responsibility

• Restricted authorisation responsibility

• Hidden authorisation responsibility

Write your answer in 350-400 words.

Ans 6.1: **Authorisation and Authentication Procedures:**

Authorisation and authentication are critical components of ensuring secure access to client systems.
Authentication verifies the identity of users, ensuring they are who they claim to be. Common methods include
passwords, multi-factor authentication, and biometrics. Authorisation, on the other hand, involves granting or
denying access rights based on authenticated users' permissions. Access control lists, role-based access
control, and policies are commonly employed.

Client access requirements should dictate the choice of authentication methods to align with security needs.
For example, sensitive systems may require multi-factor authentication, while less critical systems may rely on
password-based authentication.

Ans 6.2: **Levels of Responsibility for Authorisation and Authentication:**

- **Full Authorisation Responsibility:**

- **Description:** Users with full authorisation responsibility have unrestricted access to the system,
including the ability to modify access controls and grant permissions.

- **Role:** Administrators and superusers typically hold full authorisation responsibility.

- **Reference:** This aligns with the principle of least privilege, where users are granted the minimum level
of access necessary to perform their duties (Saltzer & Clark, 1975).

- **Restricted Authorisation Responsibility:**

- **Description:** Users with restricted authorisation can manage certain aspects of authorisation, but their
abilities are limited compared to those with full responsibility.

- **Role:** Middle-tier administrators or managers may fall into this category, with the ability to manage
access within specific departments or projects.

- **Reference:** This helps distribute administrative tasks while still maintaining control over critical aspects
of the system (Anderson, 2008).

ICTPRG533- Manage data persistence using Page 24 of 175


NoSQL data stores
V01.2022
- **Hidden Authorisation Responsibility:**

- **Description:** Users with hidden authorisation responsibility have no direct control over access
management. Their authorisation is handled transparently by the system based on predefined rules or policies.

- **Role:** Regular users with limited or predefined roles and permissions fall into this category.

- **Reference:** Hidden authorisation responsibility ensures that routine access control is managed
automatically, reducing the risk of human error in access management (Gardezi & Ab Razak, 2014).

In summary, understanding and implementing these levels of responsibility for authorisation and authentication
contribute to a secure and well-managed access control system.

**References:**
- Saltzer, D. J., & Clark, D. D. (1975). "Principles of Secure Computer Systems." Report MIT-LCS-TR-341.
- Anderson, R. (2008). "Security Engineering: A Guide to Building Dependable Distributed Systems." Wiley.
- Gardezi, S. J. D., & Ab Razak, S. (2014). "Authentication and Authorization Architecture for Cloud
Computing." 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing. doi: 10.1109/UCC.2014.85
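
To illustrate the three levels of responsibility in code, the Python sketch below maps roles to permission sets
and performs a simple authorisation check after authentication. The role names and permissions are hypothetical
examples, not a prescribed scheme.

```python
# Hypothetical role-based authorisation sketch tied to the three levels above.
ROLE_PERMISSIONS = {
    "administrator": {"read", "write", "grant_access", "modify_roles"},   # full responsibility
    "department_manager": {"read", "write", "grant_access"},              # restricted responsibility
    "regular_user": {"read"},                                             # hidden responsibility
}

def is_authorised(role, action):
    """Authorisation: check the authenticated user's role against the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Authentication verifies who the user is; authorisation then decides what they may do.
print(is_authorised("administrator", "modify_roles"))       # True
print(is_authorised("department_manager", "modify_roles"))  # False
print(is_authorised("regular_user", "write"))                # False
```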

Q7: Answer the following questions regarding the distribution of data storage across partitions:
(Satisfactory response: Yes / No)

7.1. Explain the distribution of data storage across partitions. Write your answer in 100-150 words.

Ans 7.1: **Distribution of Data Storage Across Partitions:**

In the context of database management systems, the distribution of data storage across partitions refers to the
practice of dividing a dataset into smaller subsets, each residing on separate storage units or servers. This
approach, often known as partitioning, is employed to enhance system performance, scalability, and
manageability.

By distributing data across partitions, systems can achieve parallelism in processing, allowing for concurrent read and write
operations on different parts of the dataset. This is especially crucial in distributed databases and parallel computing
environments.

References:

- Hellerstein, J. M., & Stonebraker, M. (2004). "Architecture of a Database System." Foundations and Trends® in
Databases, 1(2), 141–259. doi:10.1561/1900000002.

- DeCandia, G., Hastorun, D., Jampani, M., Kakulapati, G., Lakshman, A., Pilchin, A., ... Vogels, W. (2007).
"Dynamo: Amazon's Highly Available Key-value Store." ACM SIGOPS Operating Systems Review, 41(6), 205–220.
doi:10.1145/1323293.1294281.
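
A simple way to picture how storage is spread across partitions is hash-based placement: each key is hashed and
assigned to one of N partitions, so data (and read/write load) is distributed roughly evenly. The Python sketch
below is a conceptual illustration with made-up keys and a fixed partition count.

```python
# Conceptual sketch of distributing data storage across partitions
# using hash-based placement (made-up keys, fixed partition count).
import hashlib

NUM_PARTITIONS = 4
partitions = {p: [] for p in range(NUM_PARTITIONS)}

def partition_for(key):
    """Deterministically map a key to one of the partitions."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

for i in range(20):                      # spread 20 sample records
    key = f"order:{i}"
    partitions[partition_for(key)].append(key)

# Each partition holds roughly a quarter of the keys, and reads/writes
# for different keys can proceed in parallel on different partitions.
for p, keys in partitions.items():
    print(f"partition {p}: {len(keys)} keys")
```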

Q8: Answer the following questions regarding the debugging and testing methodologies and techniques:
(Satisfactory response: Yes / No)

8.1. Explain the debugging and testing methodologies. Write your answer in 80-100 words.

8.2. Explain the below debugging and testing techniques.

• Deduction strategy

• Debugging by brute force

• Debugging by testing

• Backtracking strategy

• Induction strategy

Write your answer in 150-180 words.

Ans 8.1: **Debugging and Testing Methodologies:**

Debugging is the process of identifying and fixing errors or bugs in software code, ensuring its correctness.
Testing, on the other hand, involves systematically evaluating software to find defects and ensure its
functionality meets specified requirements. Both debugging and testing are integral parts of the software
development life cycle, ensuring the creation of robust and reliable software.

Ans 8.2: **Debugging and Testing Techniques:**

1. **Deduction Strategy:**

- **Description:** This technique involves systematically analyzing code, identifying potential sources of errors through
logical reasoning, and deducing the most likely causes of issues.

- **Reference:**

- Börstler, J., Vihavainen, A., Paterson, J., & Adams, E. (2013). "Deductive Reasoning for the Automated
Grading of Prolog Programs." ACM Transactions on Computing Education (TOCE), 13(4), 16.
doi:10.1145/2516760.2516776

2. **Debugging by Brute Force:**

- **Description:** Involves systematically trying different inputs, configurations, or code modifications to
identify the source of errors. It's an exhaustive approach to finding solutions.

3. **Debugging by Testing:**

- **Description:** This technique involves creating and executing tests to identify and isolate bugs. Testing strategies
include unit testing, integration testing, and system testing.

- **Reference:** Myers, G. J., Sandler, C., & Badgett, T. (2011). "The Art of Software Testing." John Wiley & Sons.

4. **Backtracking Strategy:**

- **Description:** Involves systematically revisiting and reassessing decisions made during program execution to
identify and correct errors. It's a process of stepping backward to find the root cause of issues.

- **Reference:**

- Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1974). "The Design and Analysis of Computer Algorithms." Addison-Wesley.

ICTPRG533- Manage data persistence using Page 27 of 175


NoSQL data stores
V01.2022
5. **Induction Strategy:**

- **Description:** Involves inferring potential causes of errors based on observed patterns or recurring issues. It relies
on inductive reasoning to generalize from specific instances.

- **Reference:**

- Pan, L., Zhang, Z., & Lu, S. (2011). "FastTrack: Efficient and Precise Dynamic Taint Analysis for
Multithreaded Programs." ACM Transactions on Computer Systems (TOCS), 29(4), 12. doi:10.1145/2043676.2043681

These debugging and testing techniques offer various approaches to identify, isolate, and resolve software defects in different
scenarios.
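
As a brief illustration of "debugging by testing", the Python snippet below uses the standard unittest module
to exercise a hypothetical discount function; a failing test would localise a defect before any fix is
attempted. The function name and discount rules are invented for the example.

```python
# "Debugging by testing" sketch: unit tests expose and isolate defects.
# The discount rules and function are hypothetical examples.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()
```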

Q9: Answer the following questions regarding the functions and features of sort keys in NoSQL storage:
(Satisfactory response: Yes / No)

9.1. Explain functions and features of sort keys in NoSQL storage. List the ranges supported by the sort key.
Write your answer in 300-350 words.

Ans 9.1: **Functions and Features of Sort Keys in NoSQL Storage:**

In NoSQL databases, sort keys play a crucial role in organizing and retrieving data efficiently. Here are the
key functions and features:

**Data Organization:** Sort keys are used to organize data within a partition. They determine the order in which items are
stored and provide a mechanism for range queries.

**Efficient Querying:** Sort keys enable efficient querying for specific ranges of data. This is particularly
valuable in scenarios where data needs to be retrieved based on a specific order, such as chronological order
or alphabetical order.

**Composite Keys:** Many NoSQL databases support composite keys, which consist of a partition key and a sort
key. This combination allows for more granular organization and querying, as items are first grouped by the
partition key and then sorted within the partition based on the sort key.

**Range Queries:** Sort keys enable the execution of range queries, allowing users to retrieve data within a specified range.
For example, retrieving all records with timestamps between a start and end date.

ICTPRG533- Manage data persistence using Page 28 of 175


NoSQL data stores
V01.2022
**Indexing:** Sort keys are often used in conjunction with indexing mechanisms to optimize query performance.
Indexing allows for faster data retrieval based on the specified sort key criteria.

**Customizable Sorting:** NoSQL databases provide flexibility in choosing the sort key criteria. This can include numeric
values, strings, or even complex data types, providing adaptability to diverse data structures.

**Ranges Supported by Sort Keys:**

The ranges supported by sort keys vary based on the data type and database implementation. For numeric values,
the range could cover a continuous interval, while for strings, it might involve lexicographical ordering. The
specific range details depend on the database's design and the chosen sort key attributes.

In Amazon DynamoDB, for instance, the range of sort key values depends on the type of key attribute (string,
number, or binary) and its length. DynamoDB supports both ascending and descending order for range queries.

References:

- Amazon DynamoDB. (n.d.). "Working with Queries." Retrieved from
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithDynamo.html

- Sadalage, P. J., & Fowler, M. (2012). "NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot
Persistence." Addison-Wesley.
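
To show how a sort key supports range queries in practice, the sketch below uses the AWS boto3 SDK for Python
to query a DynamoDB table by partition key and a sort-key range. The table name, key names and values are
hypothetical, and the snippet assumes configured AWS credentials and an existing table with that key schema.

```python
# Hedged sketch: range query on a DynamoDB composite key using boto3.
# Table name, key names and values are hypothetical; assumes AWS credentials
# are configured and the table exists with this partition/sort key schema.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("Orders")            # hypothetical table

response = orders.query(
    KeyConditionExpression=(
        Key("customer_id").eq("C-1001")                           # partition key
        & Key("order_date").between("2023-01-01", "2023-12-31")   # sort key range
    ),
    ScanIndexForward=False,                  # descending order on the sort key
)

for item in response["Items"]:
    print(item["order_date"], item.get("total"))
```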

ICTPRG533- Manage data persistence using Page 29 of 175


NoSQL data stores
V01.2022
Q10: Answer the following questions regarding the features of transport encryptions, authentication and
authorisation:
(Satisfactory response: Yes / No)

10.1. Explain features of transport encryptions. Write your answer in 100-130 words.

10.2. Explain features of authentication. Write your answer in 150-180 words.

10.3. Explain features of authorisation. Write your answer in 100-130 words.

Explain features of transport encryptions. Write your answer in 100-130 words.


Transport encryption is a crucial security measure that protects data during its transmission over networks. One key feature is
data confidentiality, achieved through encryption algorithms that scramble information, rendering it unreadable without the
appropriate decryption key. Another vital aspect is integrity, ensuring that data remains unaltered during transit by detecting
and rejecting unauthorized modifications. Authentication mechanisms verify the identities of communicating parties,
preventing unauthorized access. Additionally, transport encryption supports forward secrecy, generating unique session keys
for each connection, enhancing security. Resistance to known cryptographic attacks and efficient key exchange protocols are
further features, ensuring robust protection against eavesdropping and tampering, making transport encryption an essential
component in safeguarding sensitive information across digital communication channels.

Explain features of authentication. Write your answer in 150-180 words.
Authentication involves verifying the identity of users or systems attempting to access resources. Key features of
authentication include multifactor authentication (MFA), which enhances security by requiring multiple credentials such as
passwords, biometrics, or tokens. Biometric authentication utilizes unique physical or behavioral characteristics like
fingerprints or facial recognition for identity verification. Strong password policies, including complexity requirements and
regular updates, are crucial in thwarting unauthorized access. Single Sign-On (SSO) streamlines user access by allowing them
to log in once to access multiple systems. Time-based access controls restrict user permissions to specific time frames,
reducing the risk of unauthorized usage. Additionally, robust authentication protocols, like OAuth and OpenID, facilitate
secure third-party access without exposing user credentials. These features collectively ensure a layered and resilient
authentication framework, fortifying systems against unauthorized access and safeguarding sensitive information.

Explain features of authorisation. Write your answer in 100-130 words.

Authorization involves granting or denying access to resources based on verified identities. Key features include role-based
access control (RBAC), assigning permissions according to predefined roles to manage user privileges efficiently. Fine-grained
access control allows for precise specification of permissions at an individual level, enhancing security. Access policies define
rules and conditions for resource accessibility, ensuring compliance with security requirements. Dynamic authorization adapts
permissions in real-time based on changing circumstances. Additionally, audit trails track and log user activities, aiding in
post-incident analysis and compliance enforcement. By combining these features, authorization establishes a structured and
secure framework for managing user access, preventing unauthorized actions, and maintaining data integrity.
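
As a small illustration of transport encryption in practice, the Python sketch below opens a TLS-protected
connection using the standard ssl module, which provides the confidentiality, integrity and server-
authentication features described above. The host name is only an example value.

```python
# Transport-encryption sketch using Python's standard ssl module.
# The host is an example; any TLS-enabled server would behave similarly.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()   # sensible defaults: certificate validation, modern protocols

with socket.create_connection((hostname, 443)) as raw_sock:
    # wrap_socket performs the TLS handshake: key exchange, server
    # authentication via its certificate, and cipher negotiation.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())      # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher())
        # Everything sent through tls_sock is now encrypted in transit.
        tls_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(120))
```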

Q11: Answer the following questions regarding the different NoSQL data store formats:
(Satisfactory response: Yes / No)

11.1. Explain different NoSQL data store formats, including:

• Key-value

• Document-based

• Column-based

• Graph-based

Write your answer in 400-450 words.

NoSQL databases are designed to handle diverse and unstructured data, and they employ various data models
to address specific use cases. Here are explanations of different NoSQL data store formats:
Key-Value Stores:
Key-value stores are the simplest NoSQL databases, associating each piece of data with a unique key. The data is stored as a collection of key-value pairs, allowing for efficient retrieval and storage. These databases are highly scalable and performant. Examples include Redis and Amazon DynamoDB. Key-value stores are suitable for scenarios where data access is primarily based on a single key.

Document-Based Stores:
Document-based databases store data in flexible, JSON-like documents. Each document contains key-value pairs, and
collections of documents can be grouped together. MongoDB is a popular document-based NoSQL database. This format is
beneficial for handling semi-structured or hierarchical data, as documents can have nested structures, arrays, and subdocuments. It is well-suited for applications with evolving schemas and complex data structures.

Column-Based Stores:
Column-family or column-based stores organize data into columns rather than rows. Each column family contains rows with a
unique key and columns associated with data attributes. Apache Cassandra and HBase are examples of column-based
databases. This model is efficient for read-heavy workloads, as it allows for fast data retrieval of specific attributes and enables
efficient storage of large amounts of sparse data.

Graph-Based Stores:
Graph databases are designed for handling data with complex relationships. They model data as nodes, edges, and properties,
representing entities, connections, and attributes. Neo4j is a widely-used graph database. This format is particularly useful for
applications involving highly interconnected data, such as social networks, fraud detection, or recommendation engines.
Graph databases excel in traversing relationships between entities and querying graph patterns.
Each NoSQL data store format has its strengths and weaknesses, making them suitable for different use cases. Key-value
stores are optimal for high-performance scenarios with simple data access patterns. Document-based stores are versatile and
accommodate dynamic schemas. Column-based stores excel in analytical processing and scalability. Graph-based stores are
powerful for applications with intricate relationships. Organizations often choose NoSQL databases based on their specific
data requirements and performance objectives, embracing the flexibility and scalability offered by these diverse data models.
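To make the key-value format concrete, the brief sketch below (an assumption-based illustration, not a required answer) writes and reads a single item in DynamoDB using boto3; every access is driven purely by the item's key. The table name and key attribute are assumed to already exist.

import boto3

# Key-value access pattern: put an item under a key, then fetch it by the same key.
table = boto3.resource("dynamodb", region_name="ap-southeast-2").Table("Customers")

table.put_item(Item={"CustomerId": "C-1001", "Name": "Avery Chen", "Tier": "gold"})
item = table.get_item(Key={"CustomerId": "C-1001"}).get("Item")
print(item)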

Q12 Answer the following questions regarding the different NoSQL data Satisfactory
: types response

12.1 Explain different NoSQL data types, including: Yes No

• Numeric/integer

• String

• Boolean

• Date time

Write your answer in 250-300 words.

NoSQL databases support a variety of data types to accommodate the diverse and dynamic nature of unstructured or semi-structured data. One common data type in NoSQL databases is the Numeric or Integer data type. Here's an explanation:
Numeric/Integer:
This data type is used to represent numerical values, both integers and floating-point numbers, in NoSQL databases. Numeric
data types are crucial for storing and manipulating quantitative information, such as counts, measurements, or any data that
involves mathematical operations. Integer data types specifically deal with whole numbers without decimal points.
In a NoSQL database, the Numeric/Integer data type can be used in various contexts, such as:
Aggregates and Counters:
NoSQL databases often handle large-scale data, and numeric types are essential for counting occurrences or aggregating
values. For example, in a document-based database like MongoDB, numeric types can be used to store counts of elements or
calculate aggregations based on integer values.
Dates and Time Stamps:
Many NoSQL databases use numeric types to represent timestamps or dates. These values are often stored as Unix
timestamps, which are integer values representing the number of seconds or milliseconds since a specific epoch. This
simplifies date and time calculations and comparisons.
Unique Identifiers:
Numeric data types are commonly used to store unique identifiers, such as auto-incremented integers or unique serial
numbers. These identifiers are crucial for indexing and retrieving data efficiently.
Range and Sorting Queries:
Numeric data types facilitate sorting and range queries, enabling efficient retrieval of data within a specific numerical range.
This is particularly useful for scenarios where data needs to be ordered or filtered based on numerical attributes.
It's important to note that the exact implementation and support for numeric data types may vary between different NoSQL
databases. Some databases might have specific types for integers and floating-point numbers, while others may have a more
generic numeric type. Understanding the specific capabilities of the NoSQL database in use is essential for effective data
modeling and application development.
String:
The string data type in NoSQL databases is used to represent textual or character-based information. Strings are sequences of
characters, which can include letters, numbers, symbols, or any combination thereof. This data type is fundamental for storing
and querying textual data in various formats. Here's how the string data type is commonly used in NoSQL databases:
Textual Data:
Strings are employed to store textual information such as names, descriptions, addresses, or any other data represented in a
human-readable form. In document-based databases like MongoDB, fields within documents can be defined as strings to store
such information.
Keys and Identifiers:
String data types are frequently used to store keys, identifiers, or references. For example, in key-value stores, the key is often
a string that uniquely identifies the associated value. In graph databases, node or edge identifiers are commonly represented
as strings.
Documents in XML and JSON:
NoSQL databases often support the storage of semi-structured or nested data formats like JSON (JavaScript Object Notation)
or XML (eXtensible Markup Language). Strings are used to represent these serialized data structures. Document-based
databases, such as CouchDB or MongoDB, leverage string data types to store and retrieve JSON-like documents.
URIs and URLs:
Strings are suitable for storing Uniform Resource Locators (URLs) or Uniform Resource Identifiers (URIs). This is particularly
relevant in scenarios where data involves links to external resources, web pages, or APIs.
Serialized Data:
NoSQL databases may store serialized data in string format. This could include serialized objects or complex data structures
that are converted into strings for storage and later deserialization.
Searchable Text:
String data types are used in full-text search scenarios, where the database allows for efficient searching and indexing of
textual content. This is common in document-based and search-oriented databases.
Understanding the nature of the data being stored and the requirements of the application is crucial for choosing the
appropriate data type. Strings are versatile and widely used, making them a foundational element in NoSQL databases for
handling a broad range of textual and symbolic information.

Boolean:
The Boolean data type in NoSQL databases is used to represent binary states, typically denoting true or false values. Booleans
are fundamental for scenarios where data can exist in only one of two possible states, making them efficient for binary
decision-making. Here's how the Boolean data type is commonly used in NoSQL databases:
Binary Decisions and States:
Booleans are often employed to store binary decisions or states, such as whether a condition is true or false. For instance, in a
document-based database like MongoDB, a Boolean field may indicate whether a document is published or not.
Querying and Filtering:
NoSQL databases use Boolean values for filtering and querying data based on specific conditions. Queries can be constructed
to retrieve documents or records that satisfy a given Boolean criterion, enhancing the flexibility of data retrieval.
Status Flags:
Booleans are frequently used as status flags to indicate the status of an entity or process. For example, in a key-value store, a
Boolean value may signify whether a key is currently active or inactive.
Configuration Parameters:
Boolean data types are suitable for storing configuration settings or switches, where a true or false value determines the
activation or deactivation of a particular feature or behavior.
Logical Conditions:
Boolean values are used in logical conditions within queries or application logic. This is crucial for controlling flow based on
true or false evaluations, ensuring appropriate actions are taken depending on the state of the data.
Filtering by Availability:
In scenarios where data represents the availability or presence of a resource, boolean values can be used to quickly filter and
identify available or unavailable items.
Relationships in Graph Databases:
In graph databases, Boolean values may be associated with edges between nodes to represent the presence or absence of a
relationship. This is useful for modeling graph structures where the edges have specific attributes.
Booleans simplify decision-making processes and are a concise way to represent binary states within NoSQL databases. They
enhance the expressiveness of queries and provide a clear and efficient means of storing and retrieving data based on logical
conditions.

Date Time:
Date time data types are employed to represent dates and times in NoSQL databases. They are crucial for scenarios that
require temporal information, such as event timestamps, deadlines, or scheduling. Date time data types can store
information at various levels of precision, including year, month, day, hour, minute, second, and even fractional seconds.
This enables accurate chronological ordering of data and facilitates queries based on time ranges or specific points in time.
Proper handling of date time data is essential for applications involving time-sensitive operations, analytics, or historical
data retrieval.
In summary, these NoSQL data types (numeric, string, Boolean and date time) offer versatility in handling quantitative values, textual information, binary states and temporal data, respectively. Their effective utilization contributes to the flexibility and efficiency of NoSQL databases, enabling them to address a wide range of data storage and retrieval requirements in diverse application scenarios.
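As an optional illustration of these data types in the DynamoDB/boto3 environment used in Task 2, the sketch below persists one item mixing a string, numbers, a Boolean and a date-time value; because DynamoDB has no native date-time type, the timestamp is stored as an ISO-8601 string (a Unix epoch number would also work). Table and attribute names are assumptions.

from datetime import datetime, timezone
from decimal import Decimal
import boto3

# One item combining string, number (integer and decimal), Boolean and date-time values.
table = boto3.resource("dynamodb", region_name="ap-southeast-2").Table("Orders")

table.put_item(Item={
    "OrderId": "ORD-42",                                  # string
    "Quantity": 3,                                        # integer, stored as a Number
    "UnitPrice": Decimal("19.95"),                        # Decimal, stored as a Number
    "Shipped": False,                                     # Boolean
    "CreatedAt": datetime.now(timezone.utc).isoformat(),  # date-time kept as an ISO string
})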

Assessment Results Sheet
Outcome First attempt:

Outcome (make sure to tick the correct checkbox):

Satisfactory (S) or Not Satisfactory (NS)

Date: _______(day)/ _______(month)/ _______(year)

Feedback:

Second attempt:

Outcome (please make sure to tick the correct checkbox):


Satisfactory (S) or Not Satisfactory (NS)
Date: _______(day)/ _______(month)/ _______(year)
Feedback:

Student  I declare that the answers I have provided are my own work. Where I
Declaration
have accessed information from other sources, I have provided
references and/or links to my sources.
 I have kept a copy of all relevant notes and reference material that I
used as part of my submission.
 I have provided references for all sources where the information is not
my own. I understand the consequences of falsifying documentation and
plagiarism. I understand how the assessment is structured. I accept that
the work I submit may be subject to verification to establish that it is my
own.

 I understand that if I disagree with the assessment outcome, I can
appeal the assessment process, and either re-submit additional evidence
undertake gap training and or have my submission re-assessed.
 All appeal options have been explained to me.

Student Signature

Date

Trainer/Assessor
Name
Trainer/Assessor I hold:
Declaration
Vocational competencies at least to the level being delivered
Current relevant industry skills
Current knowledge and skills in VET, and undertake
Ongoing professional development in VET

I declare that I have conducted an assessment of this student’s submission.


The assessment tasks were deemed current, sufficient, valid and reliable. I
declare that I have conducted a fair, valid, reliable, and flexible assessment.
I have provided feedback to the student.

Trainer/Assessor
Signature
Date

Office Use Only The outcome of this assessment has been entered into the Student
Management System

on _________________ (insert date)

by (insert Name) __________________________________

Pre-Assessment Checklist: Task 2 - Skills Test
The purpose of this checklist
The pre-assessment checklist helps students determine if they are ready for assessment. The
trainer/assessor must review the checklist with the student before the student attempts the
assessment task. If any items of the checklist are incomplete or not clear to the student, the
trainer/assessor must provide relevant information to the student to ensure they understand the
requirements of the assessment task. The student must ensure they are ready for the assessment
task before undertaking it.
Section 1: Information for Students
Make sure you have completed the necessary prior learning before attempting this assessment.
Make sure your trainer/assessor clearly explained the assessment process and tasks to be
completed.
Make sure you understand what evidence is required to be collected and how.
Make sure you know your rights and the Complaints and Appeal process.
Make sure you discuss any special needs or reasonable adjustments to be considered during the
assessment (refer to the Reasonable Adjustments Strategy Matrix and negotiate these with your
trainer/assessor).
Make sure that you have access to a computer and the internet (if you prefer to type the
answers).
Make sure that you have all the required resources needed to complete this Assessment Task
(AT).
The due date of this assessment task is in accordance with your timetable.
In exceptional (compelling and compassionate) circumstances, an extension to submit an
assessment can be granted by the trainer/assessor. Evidence of the compelling and
compassionate circumstances must be provided together with your request for an extension to
submit your assessment work.
The request for an extension to submit your assessment work must be made before the due
date.
Section 2: Reasonable adjustments
I confirm that I have reviewed the Reasonable Adjustments guidelines and criteria as
provided in Appendix A and attached relevant evidence as required and select the correct
checkbox.
I do require reasonable adjustment
I do not require reasonable adjustment
Declaration (Student to complete)
I confirm that the purpose and procedures of this assessment task has been clearly explained to
me.
I confirm that I have been consulted about any special needs I might have in relation to the
assessment process.
I confirm that the criteria used for this assessment has been discussed with me, as have the
consequences and possible outcomes of this assessment.
I confirm I have accessed and understand the assessment information as provided in the
Training Organisation’s Student Handbook.
I confirm I have been given fair notice of the date, time, venue and/or other arrangements for
this assessment.

I confirm that I am ready for assessment.

Student Name: ______________________________________

Student Signature: ___________________________________

Assessment method-based instructions and guidelines: Skills Test


Assessment type

 Skills Test - Manage data persistence using noSQL data stores

Instructions provided to the student:

Assessment task description:

 This is the second (2) assessment task you must successfully complete to be deemed
competent in this unit of competency.
 This assessment task is a Skills Test.
 This assessment task consists of thirty (30) practical demonstration activities.
 You will receive your feedback within two (2) weeks, and you will be notified by your
trainer/assessor when your results are available.
 You must attempt all activities of the project for your trainer/assessor to assess your
competence in this assessment task.

Applicable conditions:

 This skill test is untimed and is conducted as an open book assessment (this means you
are able to refer to your textbook or other learner materials during the test).
 You will be assessed independently on this assessment task.
 No marks or grades are allocated for this assessment task. The outcome of the task will be
Satisfactory or Not Satisfactory.
 As you complete this assessment task, you are predominantly demonstrating your skills,
techniques and knowledge to your trainer/assessor.
 Your trainer/assessor may ask you relevant questions during this assessment task

Resubmissions and reattempts:

 Where a student’s answers are deemed not satisfactory after the first attempt, a
resubmission attempt will be allowed.
 The student may speak to their trainer/assessor if they have any difficulty in completing
this task and require reasonable adjustments.
 For more information, please refer to the Training Organisation’s Student Handbook.

Location:

 This assessment task may be completed in:

✘ a classroom
learning management system (i.e. Moodle),
workplace,
or an independent learning environment.

 Your Trainer/Assessor will provide you with further information regarding the location for
completing this assessment task.

Purpose of the assessment

The purpose of this assessment task is to assess the student’s knowledge and skills essential to
Manage data persistence using noSQL data stores in a range of contexts and industry settings.

 Skill to create at least three different queries, including updating, deleting and creating
data types
 Skill to create at least two indexes.
 Skill to specify partition and sort keys
 Skill to optimise the data.

Task instructions
 This is an individual assessment.
 The task will be completed in your training organisation’s IT lab.
 The trainer/assessor will provide the required resources to the student/trainee.
 According to organisational needs, the student will Manage data persistence using NoSQL
data stores.
 Word-limit for this assessment task is given in the template.
 The trainer/assessor assures simulated environmental conditions for the students/trainee
for Manage data persistence using NoSQL data stores.
 The student must document completed developments.
 The student must use the templates provided to document their responses.
 The student must follow the word limits specified in the templates.
The trainer/assessor must assess the student using the performance checklist provided

Assessment Task 2 - Skills Test

The following forms the basis of the evidence that you need to collect from students for assessment in
this assessment task. The task and specific assessment requirements that are given to students are
also outlined.

 Refer to all the blue and italic text for a guide to suggested answers and benchmarking for
assessments and also for instructions on how to use the assessment tools.
 Ensure all outlined conditions of assessment requirements are met.
 For each assessment task, an Assessment Result Sheet form for the student is completed. This
is located at the end of each assessment task in the Student Pack
 This Assessment Result Sheet allows the trainer/assessor to record the following items:

o The outcome of the assessment tasks is either Satisfactory (S) or Not Satisfactory (NS).
o Feedback to the student
o The student declaration
o The Trainer/Assessor declaration

 The trainer/assessor and the student must sign the Assessment Result Sheet to show that the
student was provided with the task outcome.
 The Unit Mapping identifies what aspects of the Unit of Competency are being addressed in each
assessment task.
 Once all assessment tasks allocated to this Unit of Competency have been undertaken, the
Student’s Assessment Plan (point 5 in the Student Pack) is completed to record the unit
outcome. The outcome will be either Competent (C) or Not Yet Competent (NYC).
 When all assessment tasks are deemed Satisfactory (S), the unit outcome is Competent (C).
 If at least one assessment task is deemed Not Satisfactory (NS), the unit outcome is Not Yet
Competent (NYC).
 The following Information is attached to each assessment task:

o Assessment type
o Assessment task description
o Applicable conditions
o Resubmissions and reattempts
o Location
o Instructions for completion of the assessment task
o How trainers/assessors will assess the work
o Task-specific instructions for the student

Resources required to complete the assessment task:


 Computer
 Internet
 MS Word
 Printer or e-printer
 Team Members
 AWS Account (Cloud computing application) (Free for a Year)
 AWS DynamoDB
 AWS SDK Python
 AWS SDK .NET
 AWS DynamoDB encryption tool

Assessment task Instructions
 This is an individual assessment.
 The task will be completed in your training organisation’s IT lab.
 The trainer/assessor will provide the required resources to the student/trainee.
 According to organisational needs, the student will Manage data persistence using NoSQL data
stores.
 Word-limit for this assessment task is given in the template.
 The trainer/assessor assures simulated environmental conditions for the students/trainee for
Manage data persistence using NoSQL data stores.
 The student must document completed developments.
 The student must use the templates provided to document their responses.
 The student must follow the word limits specified in the templates.
 The trainer/assessor must assess the student using the performance checklist provided

Assessor Instructions: Task 2 – Skills Test

This assessment task requires the student:

 To create three different queries, including updating, deleting and creating data types for the
NoSQL data store.
 To create at least two indexes.
 To specify partition and sort keys.
 To optimise the data.

To do so, you must complete the following activities:

 Activity 1: Confirm use and application for NoSQL according to business requirements and needs
 Activity 2: Research and compare horizontal and vertical scaling and confirm relevance and
benefit of horizontal scaling according to business requirements
 Activity 3: Research and compare NoSQL technologies and traditional relational data models
 Activity 4: Research, review and select NoSQL vendor technologies according to business
requirements
 Activity 5: Design and determine data storage requirements from NoSQL data store according to
selected vendor technology and business requirements
 Activity 6: Review and select required types of NoSQL data store according to business
requirements
 Activity 7: Create partition key and determine storage place of data items
 Activity 8: Review and determine required partition key and ensure effective distribution of
storage across partition
 Activity 9: Determine and select the required sort key according to business requirements
 Activity 10: Calculate, determine and configure read and write through-puts according to
business requirements
 Activity 11: Determine, configure and create indexes for optimising data retrieval queries
 Activity 12: Determine and create additional indexes
 Activity 13: Optimise data queries and retrievals for indexes according to business requirements
 Activity 14: Determine and configure time-to-live (TTL) on data objects according to business
requirements
 Activity 15: Research and select required API client for interacting with NoSQL data store
according to business requirements
 Activity 16: Instantiate and connect API client to NoSQL data store instance
 Activity 17: Insert single data object into NoSQL datastore using the selected client application
 Activity 18: Insert multiple items in a single operation
 Activity 19: Use query and select single object
 Activity 20: Use query and retrieve multiple objects in batch
 Activity 21: Perform query against the index
 Activity 22: Perform query to select required attributes and project results
 Activity 23: Delete single and multiple objects according to business requirements
 Activity 24: Update single and multiple objects according to business requirements
 Activity 25: Persist objects with different data types
 Activity 26: Configure and confirm change event triggers and notifications according to business
needs
 Activity 27: Test, fix and ensure responses and trigger notifications work according to business
requirements

 Activity 28: Review and confirm data is encrypted and authorisation and authentications are
active according to user and client access requirements
 Activity 29: Test and fix data persistence process according to business requirements
 Activity 30: Document and finalise work according to business requirements.

Case Study:

Company profile:

Quipmart Solutions is a top e-commerce company in Australia, with its main headquarters in Sydney. Quipmart Solutions provides essential goods and services to its customers and has several branches, from which a large amount of data is collected into its database daily. Our main objective is to provide services that handle such large data sets (big data) and present big data insights. The company places great emphasis on understanding the customer's needs, budget and resources.

The company stores the following data from various services:

 Customer’s information.
 Seller’s information on the website.
 Employee’s information.
 Information about each product added to the website.
 Information about each product sold/bought.
 Warehouse stock data at every branch.

New Project along with Organisational requirements:

Recently, the company has switched its data stores from RDBMS to NoSQL databases. The company has approached your training organisation to create three different queries, including updating, deleting and creating data types for the NoSQL data store. You are required to use a key-value type NoSQL database, using DynamoDB from AWS services.

You are a senior software developer at your training organisation. The management of the organisation has approached you to complete the project. You are required to create three different queries, including updating, deleting and creating data types for the NoSQL data store, and to create at least two indexes. You must also specify partition and sort keys as well as optimise the data. This includes:

 Review and select NoSQL options.


 Determine and create storage of data types
 Build and configure indexes
 Use queries and retrieve objects
 Confirm interaction of objects

Task conditions

 The purpose of this assessment task is to create three different queries, including updating,
deleting and creating data types for the NoSQL data store and to create two indexes. Also, to
specify partition and sort keys as well as optimise the data.
 This assessment task will be completed in your training organisation’s IT lab. Your
trainer/assessor will supervise you in performing this assessment task.

 The student will work as a senior software developer in this assessment and will document completed developments.

Business needs

To provide continuous Availability

The business must maintain continuous availability so it can handle all kinds of data transactions at any point in time, including in difficult situations.

To adapt to low Latency Rate

Response times must be fast enough to handle the most intensive operations of the applications.

To gain Scalability

NoSQL database must handle data partitioning across multiple servers to meet the increasing data
storage requirements.

To adapt the ability to handle changes

The schema-less structure of the NoSQL database should accommodate the changes that come with time easily. A universal index must be provided for the structure, values and text found in the data so that the organisation can respond to changes immediately using this information.

To support Multiple Data Structures

The NoSQL database must cater well to the application's data requirements. Whether it is a simple object-oriented structure or a highly complex, inter-related data structure, these unstructured databases must meet all kinds of data needs of the applications. Everything from simple binary values, strings and lists to complex parent-child hierarchies, related information values and graph stores should be handled well by a Not-Only SQL database.

Business requirements

• Data must be stored in a key-value based data store.

• Table1 with a partition key, Table2 with both partition and sort keys, and Table3 with global and local indexes must be created (a minimal sketch follows this list).
• You must calculate the read and write throughputs in capacity units.
• You should use the Python API as an IAM user.
• You must perform all insert queries using the AWS CLI with the Python API.
• Update and delete queries must be performed on any table.
• Change events must be triggered, and notifications must be received at the Supervisor's email after every change.
• Data must be secured and encrypted.
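The minimal sketch below (an illustration under assumed workload figures, not the prescribed solution) shows how Table2 from the list above might be created with boto3, with provisioned read and write capacity units worked out from the standard DynamoDB rules: one read capacity unit covers one strongly consistent read per second of an item up to 4 KB, and one write capacity unit covers one write per second of an item up to 1 KB. The table name, attribute names and traffic figures are assumptions.

import boto3

# Assumed workload: 20 strongly consistent reads/sec of 3 KB items and 10 writes/sec of 1.5 KB items.
# RCU = 20 reads/sec * ceil(3 KB / 4 KB) = 20 * 1 = 20
# WCU = 10 writes/sec * ceil(1.5 KB / 1 KB) = 10 * 2 = 20
READ_CAPACITY_UNITS = 20
WRITE_CAPACITY_UNITS = 20

dynamodb = boto3.client("dynamodb", region_name="ap-southeast-2")

response = dynamodb.create_table(
    TableName="ProductSales",                               # hypothetical Table2
    AttributeDefinitions=[
        {"AttributeName": "ProductId", "AttributeType": "S"},
        {"AttributeName": "SaleDate", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ProductId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "SaleDate", "KeyType": "RANGE"},   # sort key
    ],
    ProvisionedThroughput={
        "ReadCapacityUnits": READ_CAPACITY_UNITS,
        "WriteCapacityUnits": WRITE_CAPACITY_UNITS,
    },
)
print(response["TableDescription"]["TableStatus"])           # typically "CREATING"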

User and Client Access requirements

a) All Services credentials such as Account Numbers, account passwords, User names/identifiers
(user IDs) and user passwords must be kept confidential and must not be disclosed to an
unauthorised party. No one from Service Provider will ever contact you and request your
credentials.
b) If the third party or third-party software or proprietary system or software used to access
Service Provider, data/systems are replaced or no longer in use, the passwords should be
changed immediately.
c) Create a unique user ID for each user to enable individual authentication and accountability
for access to Service Provider’s infrastructure. Each user of the system access software must
also have a unique logon password.
d) User IDs and passwords shall only be assigned to authorised individuals based on the least
privilege necessary to perform job responsibilities.
e) User IDs and passwords must not be shared, posted, or otherwise divulged in any manner.
f) Develop strong passwords that are:
• Not easily guessable (e.g. your name or company name, repeating numbers and letters, or consecutive numbers and letters)
• Contain a minimum of eight (8) alphabetic and numeric characters for standard user accounts
• For interactive sessions (i.e. non-system-to-system), ensure that passwords are changed periodically (every 90 days is recommended)
g) Passwords (e.g. account passwords, user passwords) must be changed immediately whenever there is any suspicion that a password has been disclosed to an unauthorised party (see section 4.3 for reporting requirements)
h) Ensure that passwords are not transmitted, displayed or stored in clear text; protect all end-user (e.g. internal and external) passwords using, for example, encryption or a cryptographic hashing algorithm, also known as "one-way" encryption (a brief hashing sketch follows this list). When using encryption, ensure that a strong encryption algorithm is utilised (e.g. AES 256 or above).
i) Implement password protected screensavers with a maximum fifteen (15) minute timeout to
protect unattended workstations. Systems should be manually locked before being left
unattended.
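A brief, hedged sketch of point (h) above using only the Python standard library: passwords are never stored in clear text; instead a salted one-way hash is kept and later compared. The iteration count and salt length are illustrative choices, not mandated values.

import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, one-way hash) for storage instead of the clear-text password."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-hash the supplied password with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("Str0ngPassw0rd!")
print(verify_password("Str0ngPassw0rd!", salt, digest))  # True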

Task Instructions:

 The student will create three different queries, including updating, deleting and creating data
types for the NoSQL data store.
 You will create two indexes.
 You will specify partition and sort keys.

 You will optimise the data.

Skill test:

This assessment task requires the student:

 To create three different queries, including updating, deleting and creating data types for the
NoSQL data store.
 To create at least two indexes.
 To specify partition and sort keys.
 To optimise the data.

To do so, you must complete the following activities:

 Activity 1: Confirm use and application for NoSQL according to business requirements and needs
 Activity 2: Research and compare horizontal and vertical scaling and confirm relevance and
benefit of horizontal scaling according to business requirements
 Activity 3: Research and compare NoSQL technologies and traditional relational data models
 Activity 4: Research, review and select NoSQL vendor technologies according to business
requirements
 Activity 5: Design and determine data storage requirements from NoSQL data store according to
selected vendor technology and business requirements
 Activity 6: Review and select required types of NoSQL data store according to business
requirements
 Activity 7: Create partition key and determine storage place of data items
 Activity 8: Review and determine required partition key and ensure effective distribution of
storage across partition
 Activity 9: Determine and select the required sort key according to business requirements
 Activity 10: Calculate, determine and configure read and write through-puts according to
business requirements
 Activity 11: Determine, configure and create indexes for optimising data retrieval queries
 Activity 12: Determine and create additional indexes
 Activity 13: Optimise data queries and retrievals for indexes according to business requirements
 Activity 14: Determine and configure time-to-live (TTL) on data objects according to business
requirements
 Activity 15: Research and select required API client for interacting with NoSQL data store
according to business requirements
 Activity 16: Instantiate and connect API client to NoSQL data store instance
 Activity 17: Insert single data object into NoSQL datastore using the selected client application
 Activity 18: Insert multiple items in a single operation
 Activity 19: Use query and select single object
 Activity 20: Use query and retrieve multiple objects in batch
 Activity 21: Perform query against the index
 Activity 22: Perform query to select required attributes and project results
 Activity 23: Delete single and multiple objects according to business requirements
 Activity 24: Update single and multiple objects according to business requirements
 Activity 25: Persist objects with different data types
 Activity 26: Configure and confirm change event triggers and notifications according to business
needs
 Activity 27: Test, fix and ensure responses and trigger notifications work according to business
requirements
 Activity 28: Review and confirm data is encrypted and authorisation and authentications are
active according to user and client access requirements
 Activity 29: Test and fix data persistence process according to business requirements
 Activity 30: Document and finalise work according to business requirements.
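For orientation only, the condensed sketch below illustrates several of the activities listed above (batch insert, querying an index with projected attributes, updating and deleting items) using boto3 against DynamoDB. The table "ProductSales", its key attributes and the "BranchIndex" global secondary index are assumptions, not supplied artefacts.

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="ap-southeast-2").Table("ProductSales")

# Insert multiple items in a single batched operation (Activity 18).
with table.batch_writer() as batch:
    batch.put_item(Item={"ProductId": "P-1", "SaleDate": "2024-05-01", "Branch": "SYD"})
    batch.put_item(Item={"ProductId": "P-2", "SaleDate": "2024-05-02", "Branch": "MEL"})

# Query the assumed global secondary index and project only two attributes (Activities 21-22).
result = table.query(
    IndexName="BranchIndex",
    KeyConditionExpression=Key("Branch").eq("SYD"),
    ProjectionExpression="ProductId, SaleDate",
)
print(result["Items"])

# Update a single object (Activity 24).
table.update_item(
    Key={"ProductId": "P-1", "SaleDate": "2024-05-01"},
    UpdateExpression="SET Branch = :b",
    ExpressionAttributeValues={":b": "BNE"},
)

# Delete a single object (Activity 23).
table.delete_item(Key={"ProductId": "P-2", "SaleDate": "2024-05-02"})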

Task Environment:

This assessment task will be completed in a simulated environment prepared by your training
organisation.
The simulated environment will provide you with all the required resources (such as the equipment and
participants, etc.) to complete the assessment task. The simulated environment is very much like a
learning environment where a student can practice, use and operate appropriate industrial equipment,
techniques, practices under realistic workplace conditions.

The roles and responsibilities of the senior software developer are:

 To confirm use and application for NoSQL according to business requirements and needs
 To research and compare horizontal and vertical scaling and confirm relevance and benefit of
horizontal scaling according to business requirements
 To research and compare NoSQL technologies and traditional relational data models
 To research, review and select NoSQL vendor technologies according to business requirements
 To design and determine data storage requirements from NoSQL data store according to
selected vendor technology and business requirements
 To review and select required types of NoSQL data store according to business requirements
 To create partition key and determine storage place of data items
 To review and determine required partition key and ensure effective distribution of storage
across partition
 To determine and select required sort key according to business requirements
 To calculate, determine and configure read and write through-puts according to business
requirements
 To determine, configure and create indexes for optimising data retrieval queries
 To determine and create additional indexes
 To optimise data queries and retrievals for indexes according to business requirements
 To determine and configure time-to-live (TTL) on data objects according to business
requirements
 To research and select required API client for interacting with NoSQL data store according to
business requirements
 To instantiate and connect API client to NoSQL data store instance
 To insert a single data object into NoSQL datastore using the selected client application
 To insert multiple items in a single operation
 To use query and select a single object
 To use query and retrieve multiple objects in batch
 To perform a query against the index
 To perform a query to select required attributes and project results
 To delete single and multiple objects according to business requirements
 To update single and multiple objects according to business requirements
 To persist objects with different data types
 To configure and confirm change event triggers and notifications according to business needs
 To test, fix and ensure responses and trigger notifications work according to business
requirements
 To review and confirm data is encrypted and authorisation and authentications are active
according to user and client access requirements
 To test and fix data persistence process according to business requirements
 To document and finalise work according to business requirements.

Roles and responsibilities of trainer/supervisor are:

 To provide an open-source or commercial NoSQL database


 To provide the internet, including connectivity
 To provide required hardware, software and applications.
 To provide raw data file “sampledata.docx” provided with this assessor pack.

Activity 1: Confirm use and application for NoSQL according to business requirements and needs.

This part of the activity requires you to confirm the use and application for NoSQL according to
business requirements and needs and document the outcomes using ‘Template 1’.

Description of the activity

This activity requires you to confirm the use and application for NoSQL according to business requirements and needs, based on the information provided in the case study and in consultation with your assessor.

To do so you need to:

 Identify business requirements and needs and document using Template 1.


 Confirm use and application for NoSQL from the following according to business requirements
and needs and document using Template 1.
o Session Store
o User Profile Store
o Content and Metadata Store
o Mobile Applications
o Third-Party Data Aggregation
o Internet of Things
o Social Gaming
o Ad Targeting

Template 1: Confirm use and application for NoSQL according to business requirements and needs.

Identify business requirements and needs (100-150 words)

NoSQL databases are a flexible option that may be used for a range of business requirements. They
perform exceptionally well in situations that call for adaptable data models that can handle dynamic
and unstructured data, like key-value pairs or JSON. Businesses that require scalability will find
NoSQL databases especially beneficial as they offer smooth horizontal expansion to accommodate
increasing data quantities and user traffic. NoSQL databases are useful for high-performance
applications with low latency needs, such as gaming platforms and real-time analytics. They are also
appropriate for applications ranging from content management to IoT platforms because of their
distributed and fault-tolerant architecture and support for various data models.

NoSQL databases are the go-to option for companies with dynamic data difficulties looking for
scalable, effective, and flexible solutions because they perform best in settings where quick
development cycles, flexible schema evolution, and affordable storage are critical.

Use and Application for NoSQL (350-400 words)

Session Store

Applications utilizing session stores benefit greatly from NoSQL databases like Redis or MongoDB.
They are ideal for storing session data in web applications because of their low latency and high read
and write loads. NoSQL databases' quick retrieval times and adaptable schemas help with effective
session management, guaranteeing smooth user experiences in dynamic, scalable settings.

User Profile Store

User profile stores work nicely with NoSQL databases like Cassandra or MongoDB. They give you the
freedom to deal with a wide range of user data, including different traits and preferences. NoSQL
databases are perfect for dynamic, scalable systems like social networks or e-commerce platforms
where user information is diverse and always changing because of their effective read and write
capabilities, which provide speedy access to user profiles.

Content and Metadata Store

Applications involving the storage of metadata and content work well with NoSQL databases like
Couchbase and Elasticsearch. Their scalable and rapid access enables effective metadata retrieval,
and their flexible schema supports a wide range of content formats. Because of this, NoSQL
databases are perfect for apps, media platforms, and content management systems where managing
unstructured data and related metadata is essential to providing dynamic and responsive user
experiences.

Mobile Applications

NoSQL databases, such as Firebase or Realm, are essential to mobile applications. They are ideal for
mobile app development because of their versatility in handling different sorts of data, support for
offline functioning, and horizontal scalability. NoSQL databases guarantee smooth data
synchronization between devices, improving user experiences in messaging apps, real-time
collaborative apps, and other situations where scalable and flexible data storage is necessary for
mobile operation.

Third-Party Data Aggregation

MongoDB and Apache Cassandra are two examples of NoSQL databases that are essential to third-
party data aggregation. Their adaptable schema architecture supports a wide range of data formats,
simplifying the assimilation and processing of data from many sources. NoSQL databases make it
possible to store and retrieve aggregated data efficiently, which makes it easier to integrate different
datasets in applications like business intelligence tools, analytics platforms, and systems that need
extensive third-party data aggregation.

Internet of Things

NoSQL databases are essential for Internet of Things (IoT) applications, such as MongoDB or Apache
Cassandra. In IoT applications, their capacity to manage enormous amounts of time-series data and
facilitate horizontal scalability guarantees effective storage and retrieval. NoSQL databases are
perfect for applications where flexibility, scalability, and low latency data access are essential, such
as smart devices, sensor networks, and Internet of Things platforms, since they allow real-time
processing.

Social Gaming

NoSQL databases—like Redis or MongoDB—are essential to social gaming apps. Dynamic player data,
scoring, and in-game interactions are supported by their scalability and low-latency retrieval. Real-
time updates, customized gaming experiences, and effective management of user-generated material
are all made possible by NoSQL databases. This makes them perfect for social gaming, guaranteeing
fluidity and responsiveness in settings where quick access to data and adaptability are critical.

Ad Targeting

Ad targeting systems require NoSQL databases, such as Amazon DynamoDB or Apache Cassandra.
They provide dynamic ad personalization through their capacity to manage enormous volumes of
customer data and offer real-time access. In the ever evolving world of online advertising, NoSQL
databases make it easier to store and retrieve a wide range of user preferences and behaviors,
guaranteeing efficient targeting and optimum ad delivery.
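As a hedged sketch tying the session-store use case above to the DynamoDB time-to-live feature referenced in the Task 2 activities: sessions carry an epoch-seconds expiry attribute and the table is configured to expire them automatically. The table and attribute names are assumptions.

import time
import boto3

client = boto3.client("dynamodb", region_name="ap-southeast-2")

# Enable TTL on the assumed Sessions table, keyed off the numeric "ExpiresAt" attribute.
client.update_time_to_live(
    TableName="Sessions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ExpiresAt"},
)

# Write a session item that DynamoDB will expire roughly 30 minutes from now.
client.put_item(
    TableName="Sessions",
    Item={
        "SessionId": {"S": "sess-abc123"},
        "UserId": {"S": "C-1001"},
        "ExpiresAt": {"N": str(int(time.time()) + 1800)},
    },
)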

Performance Criteria/Performance Checklist: Activity 1
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Identified business requirements and needs.  

b) Confirmed use and Application for NoSQL.  


 Session Store
 User Profile Store
 Content and Metadata Store
 Mobile Applications
 Third-Party Data Aggregation
 Internet of Things
 Social Gaming
 Ad Targeting

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 2: Research and compare horizontal and vertical scaling and confirm the relevance and benefit
of horizontal scaling according to business requirements.

This part of the activity requires you to research and compare horizontal and vertical scaling and
confirm the relevance and benefit of horizontal scaling according to business requirements and
document the outcomes using ‘Template 2’.

Description of the activity

This activity is a continuation of Activity 1.

This activity requires you to research and compare horizontal and vertical scaling and confirm the
relevance and benefit of horizontal scaling according to business requirements based on the information
provided in the case study.

To do so you need to:

 Research and document horizontal and vertical scaling and document using Template 2.
 Compare horizontal and vertical scaling based on the following factors and document using
Template 2:
o Databases
o Downtime
o Concurrency
o Message passing
 Determine benefits of horizontal scaling and document using Template 2.

Template 2: Research and compare horizontal and vertical scaling and confirm the relevance and
benefit of horizontal scaling according to business requirements.

Research and compare horizontal and vertical scaling and confirm relevance and benefit of
horizontal scaling according to business requirements (200-250 words)

Databases

Horizontal scaling (scaling out): Database performance is improved through horizontal scaling, which divides the workload among several servers. It's essential for satisfying expanding business needs, guaranteeing smooth scalability, and preserving peak performance.

Vertical scaling (scaling up): The power of a single server is increased by vertical scaling, which improves database performance. It is necessary to ensure better capacity for expanding business requirements as well as to satisfy urgent resource needs.

Downtime

Horizontal scaling: With horizontal scalability, companies may add more servers as needed, reducing downtime. It guarantees better fault tolerance, smooth expansion, and constant availability for changing business needs.

Vertical scaling: Vertical scaling increases the capacity of a single server, reducing downtime. Immediate resource expansion, ongoing availability, and meeting business needs without affecting operations all depend on it.

Concurrency

Horizontal scaling: In commercial applications, managing higher concurrency requires the use of horizontal scaling. It enables the workload to be divided among several servers, guaranteeing effective management of simultaneous user requests and transactions.

Vertical scaling: For business applications, vertical scaling is useful in managing higher concurrency. It increases the capacity of a single server so that concurrent user requests and transactions can be processed effectively without sacrificing performance.

Message passing

Horizontal scaling: In commercial applications, horizontal scaling is essential for effective message conveyance. By spreading out the workload for message processing across several servers, it guarantees scalability and top performance in dynamic scenarios.

Vertical scaling: By increasing the capacity of a single server, vertical scaling guarantees effective message passing. It is pertinent to managing heightened communications demands, delivering rapid resource enhancements, and reducing any bottlenecks.

Examples

Horizontal scaling: Businesses that must manage higher user traffic might consider horizontal scaling. E-commerce platforms have the capability to allocate workload among servers in order to guarantee smooth scalability even during moments of high demand.

Vertical scaling: For enterprises with sporadic resource requirements, vertical scaling is essential. For example, a database server with more RAM may handle increasing demands without becoming more sophisticated, guaranteeing peak performance.

Benefits of horizontal Scaling (80-100 words)

Benefits of Horizontal Scaling:

Businesses can achieve more capacity and performance with horizontal scaling, which divides workloads among several servers. It guarantees high availability, improves fault tolerance, and easily handles increasing data and user needs. Improved performance during periods of peak usage, economical scalability, and optimal resource utilisation are all made possible by this method.

Benefits of Vertical Scaling:

A single server can instantly have its resources upgraded through vertical scaling, increasing both its
capacity and performance. This method offers a simple solution without the hassles of managing
several servers, making it advantageous for applications experiencing abrupt spikes in demand. It
guarantees rapid scalability, flexibility in response to shifting workloads, and little interference with
daily operations.
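A minimal sketch, assuming an existing DynamoDB table, of how the horizontal-scaling benefit above is realised in the technology chosen for this project: switching the table to on-demand (PAY_PER_REQUEST) billing lets DynamoDB spread load across partitions without manual capacity planning. The table name is an assumption.

import boto3

# Move an assumed table to on-demand capacity so it scales out with traffic automatically.
client = boto3.client("dynamodb", region_name="ap-southeast-2")
client.update_table(TableName="ProductSales", BillingMode="PAY_PER_REQUEST")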

Performance Criteria/Performance Checklist: Activity 2
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Researched and determined Comparison of


horizontal and vertical scaling.  

b) Confirmed Benefits of horizontal Scaling.


 

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 3: Research and compare NoSQL technologies and traditional relational data models.

This part of the activity requires you to research and compare NoSQL technologies and traditional
relational data models and document the outcomes using ‘Template 3’.

Description of the activity

This activity is a continuation of Activity 2.

This activity requires you to research and compare NoSQL technologies and traditional relational data
models.

To do so, you need to:

 Conduct online research using web browsers and document comparison between NoSQL
technologies and traditional relational data models using Template 3.

Template 3: Researching and comparing NoSQL technologies and traditional relational data models.

Comparison between NoSQL technologies and traditional relational data models (150-200
words)

NoSQL Database: Flexible, schema-free architecture that can handle data that is semi-structured or unstructured.
Relational Database: A tabular, structured format with pre-established schemas that uses relationships to enforce data integrity.

NoSQL Database: Horizontally scalable, making it simple to distribute data among several servers.
Relational Database: Vertically scalable; usually involves boosting a single server's capacity.

NoSQL Database: Because there is no fixed schema, queries can be less complex, allowing for speedy and flexible querying.
Relational Database: Complicated queries are common, particularly when involving complicated interactions between tables.

NoSQL Database: May be eventually consistent rather than strictly adhering to ACID. Ideal in situations where instant consistency is not necessary.
Relational Database: Perfect for applications demanding data integrity for complicated transactions, since it guarantees robust ACID compliance.

NoSQL Database: Perfect for distributed systems, content management systems, real-time analytics, and other applications with dynamic and changing data structures.
Relational Database: Ideal for systems with complicated transactions, enterprise resource planning (ERP), financial systems, and other applications with well-defined and reliable data structures.

Performance Criteria/Performance Checklist: Activity 3
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Researched and Compared NoSQL  


technologies and traditional relational
data models.

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 4: Research, review and select NoSQL vendor technologies according to business requirements.

This part of the activity requires you to research, review and select NoSQL vendor technologies
according to business requirements and document the outcomes using ‘Template 4’.

Description of the activity

This activity is a continuation of Activity 3.

This activity requires you to Research, review and select NoSQL vendor technologies according to
business requirements based on the information provided in the case study.

This activity requires you to use a web browser and do online research.

To do so, you need to:

 Research the following types of NoSQL vendor technologies based on their strengths and weaknesses and document using Template 4:
o Document store
o Wide-column store
o Key-value store
 Review and select NoSQL vendors according to their strengths and weaknesses and document using Template 4.

Template 4: Research, review and select NoSQL vendor technologies according to business
requirements.

Research, review and select NoSQL vendor technologies according to business


requirements (200-250 words)

 Document store

Document-store NoSQL vendor technologies, such as MongoDB, store data as flexible
documents that resemble JSON. These databases work effectively in situations where the
data structures are dynamic and ever-changing. MongoDB shines in this area,
providing scalability, developer-friendliness, and compatibility with a wide range of data
formats. It enables companies to store and retrieve data effectively, in a manner consistent
with the format of their documents.

 Strengths –

Document-store NoSQL databases, like MongoDB, provide advantages such as an adaptable
schema design that can handle changing and dynamic data structures. Their scalability is
excellent, enabling effective horizontal scaling to accommodate increasing data volumes.
With JSON-like documents, document stores also support agile development by
making it simple to adjust to changing requirements. These databases work effectively in
situations requiring a variety of data formats and rapid development times.

 Weakness –

One of the major drawbacks of document-store NoSQL databases, like MongoDB, is
that denormalisation may result in greater storage use, which can complicate updates.
It can be difficult to maintain data consistency between documents, and sophisticated queries
might execute more slowly than those in conventional relational databases. Developers
switching from relational databases to a document store's flexible schema model may also
encounter a learning curve.

 Wide-column store

Wide-column store NoSQL vendor technologies, such as Apache Cassandra, arrange data in
tables where rows are uniquely identified by keys and columns can vary from row to
row. Because of its high availability, scalability, and flexibility, this
architecture can handle massive volumes of time-series data. Fault tolerance and
continuous availability are ensured by Apache Cassandra's decentralised architecture and
distributed structure, particularly in distributed and highly scalable applications.

 Strengths –

Wide-column store NoSQL databases, like Apache Cassandra, have advantages including
fault tolerance, high availability, and scalability. They guarantee continuous data availability
even in distributed environments thanks to their decentralised architecture. Because of their
adaptable schema, which allows for dynamic column addition, they can handle substantial
amounts of time-series data. When real-time applications and effective distributed data
management are required, wide-column stores perform exceptionally well.

 Weakness –

One of the possible drawbacks of wide-column store NoSQL databases, such as Apache
Cassandra, is that their intricate setup and configuration may involve a learning curve.
Multi-table transactions can be complex, and real-time analytics may encounter
difficulties. Furthermore, use cases needing intricate relationships, or situations requiring
instantaneous consistency across numerous distributed nodes, might not be the best fit for
wide-column storage.

 Key-value store

Key-value store NoSQL databases, such as Redis and Amazon DynamoDB, arrange data as
simple key-value pairs. Fast and effective data retrieval is made possible
by this structure's minimalism. Key-value stores can be used in situations where there is a
need for rapid access to unstructured or semi-structured data since they are extremely
scalable. Their superior performance and ease of use in use cases such as distributed
systems, caching, and session management allow for high-throughput access to stored
data.

 Strengths –

Key-value store NoSQL databases such as Redis are known for their scalability,
speed, and simplicity. They are excellent at delivering quick data retrieval via direct key
access, which makes them perfect for session management and caching. Key-value stores
provide high-performance access to stored data by effectively managing large datasets and
distributed systems. Their straightforward design facilitates agile development for
applications with a range of data requirements and makes implementation easier.

 Weakness –

The shortcomings of key-value store NoSQL databases, like Redis, are their inability to
support complicated data relationships and their restricted querying capabilities.
Applications requiring complex data analytics or querying might not be a good fit for them.
Furthermore, situations where more complicated data structures or relationships need to be
efficiently stored and searched may find the key-value model's simplicity to be a hindrance.

Performance Criteria/Performance Checklist: Activity 4
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Researched NoSQL vendor technologies:


 
 Document store (MongoDB,
Couchbase DB)
 Wide-column store (Cassandra,
HBase)
 Key-value store (Redis, Memcached)

b) Reviewed and selected NoSQL vendors  


according to their strengths and weaknesses.

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 5: Design and determine data storage requirements from NoSQL data store according to
selected vendor technology and business requirements.

This part of the activity requires you to design and determine data storage requirements from the
NoSQL data store according to selected vendor technology and business requirements and document
the outcomes using ‘Template 5’.

Description of the activity

This activity is a continuation of Activity 4.

This activity requires you to design and determine data storage requirements from the NoSQL data
store according to selected vendor technology and business requirements based on the information
provided in the case study.

This activity requires you to use an AWS account and access the DynamoDB service selected in Activity
4.

To do so, you need to:

 Design and determine data storage requirements from NoSQL data store based on the following
scenarios:
o Data size
o Data shape
o Data velocity

Further, you must:


 Take a screenshot of each step implemented to design and determine data storage requirements
using NoSQL data store and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to design and determine data storage requirements from the
NoSQL datastore using Template 5.

Template 5: Design and determine data storage requirements from the NoSQL data store.

Design and determine data storage requirements from NoSQL data store (DynamoDB)
(150-200 words)

Using the selected vendor technology, customize your NoSQL data store taking business
requirements into account. For flexibility, choose a document-oriented database such as
MongoDB. Create a productive data model that uses suitable indexing and is in line with
business entities. Modify storage configurations to satisfy scalability and performance
requirements, including cache size and compression. Install security measures in accordance
with corporate needs, set up backup plans, and keep an eye out for peak performance. Assess
data expansion on a regular basis, modify storage plans to meet changing company needs,
and keep your NoSQL solution flexible and strong.

Data size:

Using MongoDB, create a NoSQL document store based on the size and requirements of the business.
Make use of the adaptable indexing techniques, storage engine configurations, and schema offered by
MongoDB. Plan for scalability, prioritize read and write efficiency, and put security measures in place.
Maintain regular performance monitoring and tuning to make sure it's in line with company needs and
the expansion of data.
Data shape:
Use a document store, such as MongoDB, to store NoSQL data in accordance with business
requirements. Create a data model that can handle a variety of data shapes that fits the document-
oriented structure. A flexible and effective solution for a range of data structures and changing
business needs can be achieved by optimizing storage configurations, indexing, and security
procedures in accordance with business requirements.
Data velocity:
Use a document store for NoSQL data storage that can support different data velocities and business
needs, like MongoDB. Create a data model that is effective and able to manage data with high
velocity. Optimize indexing and storage configurations according to business needs to provide smooth
scalability for high-velocity, real-time data processing.
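As a minimal sketch of how these requirements might translate into a concrete table definition, the following boto3 call creates a DynamoDB table shaped around a high-velocity, time-ordered workload. The table name, attribute names and region are assumptions for illustration, and it presumes AWS credentials have already been configured.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")    # region is an assumption

dynamodb.create_table(
    TableName="SensorReadings",                                 # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "device_id", "AttributeType": "S"},
        {"AttributeName": "reading_ts", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "device_id", "KeyType": "HASH"},      # data shape: one partition per entity
        {"AttributeName": "reading_ts", "KeyType": "RANGE"},    # data velocity: time-ordered writes
    ],
    BillingMode="PAY_PER_REQUEST",   # on-demand capacity absorbs growth in data size and velocity
)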

Performance Criteria/Performance Checklist: Activity 5
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Designed and determined data storage


requirements from NoSQL data store  
(DynamoDB)
 Data size
 Data shape
 Data velocity

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 6: Review and select required types of NoSQL data store according to business requirements.

This part of the activity requires you to review and select required types of NoSQL data store according
to business requirements and document the outcomes using ‘Template 6’.

Description of the activity

This activity is a continuation of Activity 5.

This activity requires you to review and select required types of NoSQL data store according to business
requirements based on the information provided in the case study.

To do so, you need to:

 Review following types of NoSQL datastore and document using Template 6:


o Column-oriented database
o Key-value database
o Document database
o Graph database
 Select the Key-value database according to business requirements and document using Template
6.

Template 6: Reviewing and selecting required types of NoSQL data store according to business
requirements.

Types of NoSQL data store (300-350 words)

Column-oriented database

Column-oriented databases are one kind of NoSQL data store that offers significant advantages in some business contexts. They work well for handling massive amounts of data with intricate queries because they are made for analytics and data warehousing. They are particularly useful for read-intensive workloads and analytical operations, where businesses need quick query speed. The architecture optimizes the speed at which data can be retrieved by enabling the efficient compression and retrieval of particular columns.

On the other hand, conventional row-based databases might be more appropriate for transactional systems that require frequent changes. Consequently, companies looking for strong reporting and analytics features frequently discover that column-oriented databases, such as Apache Cassandra or HBase, fit their needs perfectly.

Key-value database

Key-value databases are a subset of NoSQL data stores designed to meet certain business requirements. They provide excellent read and write performance and scalability for basic data models. Key-value databases, such as Redis or Amazon DynamoDB, are perfect for situations where you need to retrieve data quickly based on unique keys. They are also appropriate for applications that need session storage or caching.

They might not be the best option, though, for intricate relationships or queries. Key-value databases frequently match the needs of businesses that prioritize scalability, simplicity, and speed, such as those that demand real-time applications or cache layers, by offering scalable and effective data access.
Document database

A subset of NoSQL data stores known as document-oriented databases are very helpful to companies whose data structures are dynamic and ever-changing. Well-known examples are MongoDB and CouchDB, which store data in adaptable, JSON-like documents and make managing a wide range of data easy. Because these databases offer both dynamic schema updates and horizontal scaling, they are suitable for enterprises that need to be scalable.

Document databases offer effective searching and indexing for applications with intricate data linkages and a variety of formats. Document-oriented databases are a good fit for businesses with changing needs, including content management systems and e-commerce platforms, since they provide performance and flexibility for a wide range of data kinds and formats.

Graph database

Graph databases, a kind of NoSQL data store, are great at handling intricate relationships between entities, which makes them perfect for companies that emphasize connected data. Neo4j and Amazon Neptune are two notable examples. These databases handle situations like social networks, fraud detection, and recommendation engines by utilizing graph structures and queries to quickly traverse and evaluate complex connections. Businesses can make more informed decisions by gaining insights into relationships through the use of graph databases.

However, other NoSQL types can be more appropriate for less complex, non-relational data. Graph databases find their place in applications where achieving business objectives precisely and quickly depends on understanding and utilizing relationships among data elements.

Performance Criteria/Performance Checklist: Activity 6
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Reviewed types of NoSQL datastore:  


 Column-oriented database
 Key-value database
 Document database
 Graph database

b) Selected the Key-value database  


according to business requirements.

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 7: Create partition key and determine storage place of data items.

This part of the activity requires you to create a partition key and determine the storage place of data
items, and document the outcomes using ‘Template 7’.

Description of the activity

This activity is a continuation of Activity 6.

This activity requires you to create a partition key and determine the storage place of data items.

This activity requires you to use an AWS account and access the DynamoDB service.

To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.

To do so, you need to:

• Create a Table from the “sampledata.docx” file and give it the name ‘Table1’.
• Create partition key with the name ‘ID’.
• Add attributes and values for items in ‘Table1’ by taking reference from “sampledata.docx”.
• Determine the storage place of data items using the Scan or Query button.

Further, you must:


 Take a screenshot of each step implemented to create a partition key and determine the storage
place of data items and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to create a partition key and determine the storage place of
data items using Template 7.

Template 7: Create partition key and determine storage place of data items.

Create partition key example (50-80 words)

A partition key is an important notion in the context of databases, particularly in NoSQL databases

like DynamoDB. Data is divided among several storage nodes using the partition key, allowing for

scalability and effective data retrieval. Here's a generic how-to for making a partition key and

figuring out where data objects are stored:

Choose a partition key:

Choose a field from your dataset to use as the partition key; this field is used to spread data among several partitions. To lessen the possibility of "hot" partitions with excessively high data access, use a field that distributes data evenly.

Make a DynamoDB Table:

 Using your AWS account, access the DynamoDB console.

 Then select "Create Table."

 Type the name of the table here.

Specify the partition key:

 Define the partition key in the "Primary key" section.

 It's usually a good idea to use a natural property found in your data, such as a product

code or user ID, to function as a unique identifier.
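The console steps above can also be reproduced programmatically. The sketch below uses boto3 (the AWS SDK for Python) to create 'Table1' with the partition key 'ID'; it assumes AWS credentials are already configured and that the region shown is the one in use.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")    # region is an assumption

# Create 'Table1' with 'ID' as the partition (HASH) key.
dynamodb.create_table(
    TableName="Table1",
    AttributeDefinitions=[{"AttributeName": "ID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "ID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Wait until the table is active before adding items.
dynamodb.get_waiter("table_exists").wait(TableName="Table1")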

Determine storage place of data items (180-200 words)

The partition key in NoSQL databases, such as DynamoDB, determines where data items are
stored. This is how it operates:
Partitions:
DynamoDB divides data among several servers using the partition key.
Every partition has its own throughput and storage capacity.
Partition key:
The partition key gives every item in the table a distinct identity.
DynamoDB uses the value of the partition key to identify the partition where the data item is kept.
Distribution:
Data items that share the same partition key value are stored in the same partition.
In order to avoid hotspots or uneven data distribution, DynamoDB distributes items across partitions in an equitable fashion.
Optimal performance:
Selecting an appropriate partition key is essential for uniform distribution and maximum efficiency.
It makes sure that read and write operations are distributed evenly among partitions, which maximises performance.
Scaling:
By adding or removing partitions in response to changes in data volume and throughput needs, DynamoDB can scale dynamically.
As a result, scaling is possible without compromising performance.
Secondary indexes:
If you use secondary indexes, DynamoDB partitions the index items separately according to the index key.
Improving query performance requires an understanding of secondary index partitioning.
Query efficiency:
Effective queries use the partition key to obtain items directly from a partition.
Operations are highly predictable and performant when querying using the partition key.
Considerations:
To prevent hot partitions, select a partition key that spreads data access patterns evenly.
DynamoDB can scale more efficiently when the distribution of your data access is uniform across all partition key values.
Summary:
In conclusion, the partition key that is selected has a significant impact on where data items are stored in DynamoDB. Understanding DynamoDB's partitioning method and choosing the right partition key will help you optimise data storage and access patterns for maximum scalability and speed.
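A short boto3 sketch makes the point above visible: the partition key value of an item decides where it is stored, a Query can go straight to that partition, while a Scan must read every partition. The item values mirror the Template 8 data and the region is an assumption.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="us-west-2")   # region is an assumption
table = dynamodb.Table("Table1")

# DynamoDB hashes the partition key value ('ID') to pick the partition that stores the item.
table.put_item(Item={"ID": "001", "FName": "John", "LName": "Smith"})

# Query: goes directly to the partition that owns ID = '001'.
by_key = table.query(KeyConditionExpression=Key("ID").eq("001"))
print(by_key["Items"])

# Scan: reads every partition and filters afterwards, so it is far less efficient.
print(table.scan()["Items"])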

Performance Criteria/Performance Checklist: Activity 7

Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Created Table and given its name


‘Table1’.  

b) Created partition key with name ‘ID’.


 

c) Added attributes and values for items in  


‘Table1’ by taking reference from
“sampledata.docx”.

d) Determined the storage place of data


items using the Scan or Query button.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 8: Review and determine required partition key and ensure effective distribution of storage
across the partition.

This part of the activity requires you to review and determine the required partition key and ensure
effective distribution of storage across the partition, and document the outcomes using ‘Template 8’.

Description of the activity

This activity is a continuation of Activity 7.

This activity requires you to review and determine the required partition key and ensure effective
distribution of storage across the partition.

This activity requires you to use an AWS account and access the DynamoDB service.

To do so, you need to:

 Review the required partition key created in Activity 7.


• Ensure effective distribution of storage across partition by observing the items and their
attributes separately with their respective partition key.

Further, you must:


 Take a screenshot of each step implemented to review and determine the required partition key
and ensure effective distribution of storage across the partition and submit it to the
trainer/assessor via e-mail.
 Document the steps implemented to review and determine the required partition key and ensure
effective distribution of storage across the partition using Template 8.

Template 8: Review and determine the required partition key and ensure effective distribution of
storage across the partition.

Review the partition key created in Activity 7 (100-150 words)

Data Distribution: Partition Key

ID FName LName

001 John Smith

002 Jaffery Jones

003 Caleb Peter

004 Narrin Brown

005 Diana johnson

Ensure effective distribution of storage across the partition by observing the items and their attributes

 I have created a table named ‘Table1’.
 I created a partition key named ‘ID’.
 There are two attributes: FName and LName.
 I have given values to all the attributes as well.
 All the IDs have a first name and a last name respectively.
 The IDs are 001, 002, 003, 004 and 005.
 The FName values for IDs 001, 002, 003, 004 and 005 are John, Jaffery, Caleb, Narrin and Diana.
 The LName values for IDs 001, 002, 003, 004 and 005 are Smith, Jones, Peter, Brown and Johnson.
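One way to observe the items and their attributes against their partition key, as described above, is a quick scan that prints each item with its 'ID'. A minimal sketch follows, assuming boto3 and configured credentials; the region is an assumption.

import boto3

table = boto3.resource("dynamodb", region_name="us-west-2").Table("Table1")   # region assumed

# Print every item with its partition key so the spread of IDs can be checked.
for item in table.scan()["Items"]:
    print(item["ID"], item.get("FName"), item.get("LName"))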

Performance Criteria/Performance Checklist: Activity 8
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Reviewed the required partition key


created in Activity 7.  
b) Ensured effective distribution of storage
across partition by observing the items and  
their attributes separately with their
respective partition key.

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 9: Determine and select the required sort key according to business requirements.

This part of the activity requires you to determine and select the required sort key according to
business requirements and document the outcomes using ‘Template 9’.

Description of the activity

This activity is a continuation of Activity 8.

This activity requires you to determine and select the required sort key according to business
requirements based on the information provided in the case study.

This activity requires you to use an AWS account.

To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.

To do so, you need to:

• Determine and select the required sort key using the below steps:
o Create another table with the name ‘Table2’ by taking data reference from
“sampledata.docx”.
o Create Partition key ‘ID’.
o Select and Create Sort Key ‘age’ from table 2.
o Add attributes and values for items in ‘Table2’ by taking reference from
“Sampledata.docx”.

Further, you must:


 Take a screenshot of each step implemented to determine and select the required sort key
according to business requirements and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to determine and select the required sort key according to
business requirements using Template 9.

Template 9: Information on determining and selecting required sort key according to business
requirements.

Determine and select the required sort key (50-80 words)

In DynamoDB tables in particular, a sort key is necessary for effective data organization and
querying. It makes it possible to arrange items inside a partition according to particular
criteria. This facilitates the retrieval of material in a sorted sequence and allows range
searches. In NoSQL databases, sort keys improve query flexibility by enabling the retrieval
of items depending on a range of values. This leads to better performance and customized
data retrieval.
In Table2, we have created ‘Age’ as the sort key so that data within each partition is ordered and retrieved correctly.
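A minimal boto3 sketch of the same idea follows: 'Table2' is created with 'ID' as the partition key and 'age' as the sort key, after which a range condition on the sort key becomes possible. The table and key names match the activity; the region, capacity figures and example key values are assumptions.

import boto3
from boto3.dynamodb.conditions import Key

client = boto3.client("dynamodb", region_name="us-west-2")      # region is an assumption

client.create_table(
    TableName="Table2",
    AttributeDefinitions=[
        {"AttributeName": "ID", "AttributeType": "S"},
        {"AttributeName": "age", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "ID", "KeyType": "HASH"},     # partition key
        {"AttributeName": "age", "KeyType": "RANGE"},   # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},  # assumed values
)
client.get_waiter("table_exists").wait(TableName="Table2")

# The sort key allows range conditions within a partition, e.g. items for ID '001'
# whose age falls between 20 and 40.
table = boto3.resource("dynamodb", region_name="us-west-2").Table("Table2")
response = table.query(
    KeyConditionExpression=Key("ID").eq("001") & Key("age").between(20, 40)
)
print(response["Items"])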

Performance Criteria/Performance Checklist: Activity 9
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Created another Table with the name


‘Table2’.  

b) Created Partition key ‘ID’.


 

c) Created Sort Key ‘age’.


 

d) Added attributes and values for items in  


‘Table2’ by taking reference from
“sampledata.docx”.

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 10: Calculate, determine and configure read and write through-puts according to business
requirements.

This part of the activity requires you to calculate, determine and configure read and write through-puts
according to business requirements and document the outcomes using ‘Template 10’.
Description of the activity

This activity is a continuation of Activity 9.

This activity requires you to calculate, determine and configure read and write through-puts according
to business requirements based on the information provided in the case study.

This activity requires you to use an AWS account with DynamoDB and CloudWatch services.

To do so, you need to:

• Click on Table2 in the dashboard.


• Move to the monitor tab on the right panel of the table metrics.
• Calculate read and write through-puts from CloudWatch metrics Capacity Units.
• Determine and configure read and write through-puts.

Further, you must:


 Take a screenshot of each step implemented to calculate, determine and configure read and
write through-puts according to business requirements and submit it to the trainer/assessor via
e-mail.
 Document the steps implemented to calculate, determine and configure read and write through-
puts according to business requirements using Template 10.

Template 10: Information on calculating, determining and configuring read and write through-puts
according to business requirements.

Read-Write throughput metrics from Capacity Units (80-100 words)

Determine and configure read and write through-puts.

 Read throughput
Read Capacity Units (RCUs) are used in Amazon DynamoDB to measure read throughput.
One RCU equals one strongly consistent read per second, or two eventually consistent reads
per second, for an item up to 4 KB in size. RCUs give users the ability to allocate and adjust
resources to match their DynamoDB tables' specific read performance requirements.

 Write throughput
Write Capacity Units (WCUs) are used in Amazon DynamoDB to measure write performance
metrics. For items up to 1 KB in size, one WCU denotes the ability to execute one write operation
per second. With WCUs, users can provide and modify resources to satisfy their DynamoDB
tables' unique write performance requirements.
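A small worked example, assuming strongly consistent reads and the illustrative workload figures in the comments, shows how the capacity-unit rules above turn into concrete numbers and how the result could be applied to 'Table2'. The update_table call is only valid while the table uses provisioned capacity rather than on-demand, and the region is an assumption.

import math
import boto3

# Assumed workload figures for illustration only.
item_size_kb = 2          # average item size
reads_per_second = 100    # strongly consistent reads required per second
writes_per_second = 40    # writes required per second

# 1 RCU = 1 strongly consistent read/second of an item up to 4 KB.
# 1 WCU = 1 write/second of an item up to 1 KB.
rcus = reads_per_second * math.ceil(item_size_kb / 4)    # 100 * 1 = 100
wcus = writes_per_second * math.ceil(item_size_kb / 1)   # 40 * 2  = 80
print(f"Provision roughly {rcus} RCUs and {wcus} WCUs")

# Apply the calculated throughput to Table2 (region is an assumption).
boto3.client("dynamodb", region_name="us-west-2").update_table(
    TableName="Table2",
    ProvisionedThroughput={"ReadCapacityUnits": rcus, "WriteCapacityUnits": wcus},
)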

Performance Criteria/Performance Checklist: Activity 10


Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Clicked on the Table2 in the dashboard.


 
b) Moved to the monitor tab on the right
panel of the table metrics.  

c) Calculated read and write through-puts


from CloudWatch metrics Capacity Units.  

d) Determined and configured read and


write through-puts.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 11: Determine, configure and create indexes for optimising data retrieval queries.

This part of the activity requires you to determine, configure and create indexes for optimising data
retrieval queries and document the outcomes using ‘Template 11’.

Description of the activity

This activity is a continuation of Activity 10.

This activity requires you to determine, configure and create two (2) indexes for optimising data
retrieval queries.

This activity requires you to use an AWS account with the DynamoDB service.

To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.

To do so, you need to:

• Determine and Create a Table with the name ‘Table3’.


• Select Customise settings and scroll down.
• Enable Autoscaling.
• Create and configure a global index name ‘id-age-index’.
• Create and configure a local index name ‘age-index’.

Further, you must:


• Take a screenshot of each step implemented to determine, configure and create indexes for
optimising data retrieval queries and submit it to the trainer/assessor via e-mail.
• Document the steps implemented to determine, configure and create indexes for optimising data
retrieval queries using Template 11.

Template 11: Information on determining, configuring and creating two (2) indexes for optimising data
retrieval queries.

Determine, configure and create two (2) indexes for optimising data retrieval queries (80-
100 words)

Analyze query patterns to find commonly run queries before optimizing data retrieval queries. Based on

JOIN operations and filtering conditions, select the relevant columns. For important filtering attributes,

create single-column indexes using the syntax CREATE INDEX idx_column1 ON your_table(column1).

For queries with numerous conditions, take into account composite indexes as well. For example,

CREATE INDEX idx_column1_column2 ON your_table(column1, column2). Respect data types, keep an
eye on index utilization, and update statistics on a regular basis. Analyze and test how indexes affect
the speed of queries. Make use of database tools to get optimization recommendations. As the
database changes, make sure to periodically review and modify your indexing strategies to
maintain their effectiveness.
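The following boto3 sketch shows one way the Activity 11 setup could look in code: 'Table3' created with a global secondary index 'id-age-index' and a local secondary index 'age-index'. The composite primary key (id plus FName) is an assumption made so that a local index can share the table's partition key, the region is an assumption, and auto scaling of provisioned capacity is configured separately (on-demand billing is used here instead).

import boto3

client = boto3.client("dynamodb", region_name="us-west-2")      # region is an assumption

client.create_table(
    TableName="Table3",
    AttributeDefinitions=[
        {"AttributeName": "id", "AttributeType": "S"},
        {"AttributeName": "FName", "AttributeType": "S"},
        {"AttributeName": "age", "AttributeType": "N"},
    ],
    # Assumed composite primary key for illustration.
    KeySchema=[
        {"AttributeName": "id", "KeyType": "HASH"},
        {"AttributeName": "FName", "KeyType": "RANGE"},
    ],
    # Local index: same partition key as the table, alternative sort key.
    LocalSecondaryIndexes=[{
        "IndexName": "age-index",
        "KeySchema": [
            {"AttributeName": "id", "KeyType": "HASH"},
            {"AttributeName": "age", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # Global index: its own key schema, queried independently of the table key.
    GlobalSecondaryIndexes=[{
        "IndexName": "id-age-index",
        "KeySchema": [
            {"AttributeName": "id", "KeyType": "HASH"},
            {"AttributeName": "age", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",   # auto scaling applies to provisioned capacity, configured separately
)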

Performance Criteria/Performance Checklist: Activity 11

Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Determined and Created a Table with


the name ‘Table3’.  
b) Selected Customise settings and scroll
down.  
c) Enabled Autoscaling.
 
d) Created and configured a global index
name ‘id-age-index’.  
e) Created and configured a local index
name ‘age-index’.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 12: Determine and create additional indexes.

This part of the activity requires you to determine and create additional indexes and document the
outcomes using ‘Template 12’.

Description of the activity

This activity is a continuation of Activity 11.

This activity requires you to determine and create additional indexes.

This activity requires you to use an AWS account with the DynamoDB service.

To do so, you need to:

• Click on ‘Table3’ from the dashboard.


• Move to the indexes tab.
• Click on ‘Create index’ and give it the name ‘id-index’.
• Configure the index.
• Enable Autoscaling if disabled.
• Click create.

Further, you must:


• Take a screenshot of each step implemented to determine and create additional indexes and
submit it to the trainer/assessor via e-mail.
• Document the steps implemented to determine and create additional indexes using Template
12.

Template 12: Information on determining and creating additional indexes.

Determine and create additional indexes (50-100 words)

Identify locations with high resource usage or sluggish response times by analyzing query execution
plans and creating additional indexes for improved database performance. Pay attention to columns
that are commonly used in JOIN conditions or WHERE clauses. To create indexes for selected columns,
use SQL commands such as CREATE INDEX, taking into account single-column and composite indexes
according to query needs. Track database performance and modify indexing tactics as necessary.
Evaluate and reevaluate how new indexes affect the execution of queries on a regular basis. As the
database changes, experiment with various indexing setups and make use of database tools to optimize
indexes for maximum efficiency in data retrieval.
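For the DynamoDB side of this activity, an additional global secondary index named 'id-index' can be added to the existing 'Table3' with a single UpdateTable call. A minimal boto3 sketch follows; the region is an assumption, and a ProvisionedThroughput entry would be required inside the Create block if the table used provisioned capacity.

import boto3

client = boto3.client("dynamodb", region_name="us-west-2")      # region is an assumption

# Add a further global secondary index, 'id-index', to the existing Table3.
client.update_table(
    TableName="Table3",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "id-index",
            "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            # Add "ProvisionedThroughput" here if the table uses provisioned capacity.
        }
    }],
)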

Performance Criteria/Performance Checklist: Activity 12
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Clicked on ‘Table3’ from the dashboard.


 

b) Moved to the indexes tab.


 

c) Clicked on the created index with the


name ‘id-index’.  

d) Configured the index.


 

e) Enabled Auto scaling if disabled.


 

f) Clicked create.
 

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 13: Optimise data queries and retrievals for indexes according to business requirements.
This part of the activity requires you to optimise data queries and retrievals for indexes according to
business requirements and document the outcomes using ‘Template 13’.

Description of the activity

This activity is a continuation of Activity 12.

This activity requires you to optimise data queries and retrievals for indexes according to business
requirements based on the information provided in the case study.

This activity requires you to use an AWS account with the DynamoDB service.

To do so, you need to:

• Optimise data queries and Retrievals using the below techniques for indexes created in the
activity ‘11’ and activity ‘12’:
o Filtered data retrieval
o Sorted data retrieval
o Range-based data retrieval

Further, you must:


 Take a screenshot of each step implemented to optimise data queries and retrievals for indexes
according to business requirements and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to optimise data queries and retrievals for indexes according
to business requirements using Template 13.

Template 13: Optimise data queries and retrievals for indexes according to business requirements.

Optimise data queries and Retrievals using the below techniques (100-150 words)

Filtered data retrieval:

Use range queries, indexing to filter columns, and WHERE clauses to optimize data queries. For
effective data retrieval, make use of pagination, aggregation functions, and full-text search. Think
about stored procedures, caching, and regular monitoring. Use storage optimization and
compression strategies, as well as database-specific features, to improve filtered data retrieval in a
focused and effective way.

Sorted data retrieval:


Sort retrieved data effectively to improve data queries. For speedy sorting, utilize ORDER BY clauses
with indexed columns. Use pagination to restrict results and cut down on retrieval times. Investigate
caching strategies and use compression to improve efficiency. To guarantee effective and organized
data retrieval in databases, monitor and adjust queries on a regular basis.

Range-based data retrieval:


By defining data ranges using WHERE clauses, you can maximize data queries with range-based
retrieval. For quicker access, create indexes on the columns used in range conditions. Use pagination
to reduce query load and limit results. To guarantee effective and focused data retrieval within
predetermined ranges, monitor and modify the plan on a regular basis.
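A minimal boto3 sketch of the three techniques against the indexes from Activities 11 and 12 is shown below. The key values, filter condition and region are assumptions for illustration; what matters is the shape of each call: FilterExpression for filtered retrieval, ScanIndexForward for sorted retrieval, and a between() key condition for range-based retrieval.

import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb", region_name="us-west-2").Table("Table3")   # region assumed

# Filtered retrieval: the key condition narrows the partition, the filter trims the result set.
filtered = table.query(
    IndexName="id-age-index",
    KeyConditionExpression=Key("id").eq("025"),
    FilterExpression=Attr("FName").begins_with("D"),
)

# Sorted retrieval: results come back ordered by the index sort key ('age');
# ScanIndexForward=False returns them in descending order.
sorted_desc = table.query(
    IndexName="age-index",
    KeyConditionExpression=Key("id").eq("025"),
    ScanIndexForward=False,
)

# Range-based retrieval: a between() condition on the sort key.
in_range = table.query(
    IndexName="id-age-index",
    KeyConditionExpression=Key("id").eq("025") & Key("age").between(20, 40),
)

for label, resp in (("filtered", filtered), ("sorted", sorted_desc), ("range", in_range)):
    print(label, resp["Items"])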

Performance Criteria/Performance Checklist: Activity 13
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Optimised data queries and Retrievals


using the below techniques for indexes:  
• Filtered data retrieval
• Sorted data retrieval
• Range-based data retrieval

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 14: Determine and configure time-to-live (TTL) on data objects according to business
requirements.

This part of the activity requires you to determine and configure time-to-live (TTL) on data objects
according to business requirements and document the outcomes using ‘Template 14’.

Description of the activity

This activity is a continuation of Activity 13.

This activity requires you to determine and configure time-to-live (TTL) on data objects according to
business requirements based on the information provided in the case study.

This activity requires you to use an AWS account with the DynamoDB service.

To do so, you need to:

• Determine and Configure time-to-live (TTL) on data objects, i.e. data items using the below
steps:
o Login to AWS Management Console
o Navigate to Table2.
o Proceed to Table details
o Enter the ‘FName’ for the TTL attribute.
o Click Continue.

Further, you must:


 Take a screenshot of each step implemented to determine and configure time-to-live (TTL) on
data objects according to business requirements and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to determine and configure time-to-live (TTL) on data objects
according to business requirements using Template 14.

Template 14: Determine and configure time-to-live (TTL) on data objects according to business
requirements.

Determine and configure time-to-live (TTL) on data objects (80-100 words)

Information in a system must have a lifespan, which is configured by setting the Time-to-Live (TTL)
on data objects. The duration of data validity is determined by this parameter before it expires. To
control data freshness, TTL is frequently used in database systems and caching techniques.
Organizations can manage how long data is cached or retained, and make sure it remains
relevant, by setting a TTL value. TTL configuration is necessary to maximize system
efficiency, reduce storage expenses, and preserve data accuracy by automatically deleting or
refreshing out-of-date information within the designated period of time.
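Enabling TTL is a single API call. The sketch below uses boto3 to switch TTL on for 'Table2' with the attribute named in this activity ('FName'); the region is an assumption. Note that DynamoDB only expires an item when the TTL attribute holds a numeric Unix epoch timestamp, so in practice the chosen attribute would store expiry times.

import boto3

client = boto3.client("dynamodb", region_name="us-west-2")      # region is an assumption

# Enable TTL on Table2 using the attribute named in the activity.
client.update_time_to_live(
    TableName="Table2",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "FName"},
)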

Performance Criteria/Performance Checklist: Activity 14
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Logged in to AWS Management Console


 
b) Navigated to Table2.
 
c) Proceeded to Table details
 
d) Entered the ‘FName’ for TTL attribute.
 
e) Clicked Continue.
 
f) Determined and Configured time-to-live
(TTL) on data objects  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 15: Research and select required API client for interacting with NoSQL data store according to
business requirements.

This part of the activity requires you to research and select the required API client for interacting with
the NoSQL data store according to business requirements and document the outcomes using ‘Template
15’.

Description of the activity

This activity is a continuation of Activity 14.

This activity requires you to research and select the required API client for interacting with the NoSQL
datastore selected in “Activity 6” according to business requirements based on the information provided
in the case study.

This activity requires you to use an AWS account with the DynamoDB service.

To do so, you need to:

• Research API client for interacting with NoSQL datastore:


o Java
o JavaScript in the browser
o .NET
o Node.js
o PHP
o Python
o Ruby
o C++
o Go
o Android
o iOS
• Select the required API Client ‘AWS SDK for Python’.

Note: Before you can use the AWS SDKs with DynamoDB, you must get an AWS access key ID and
secret access key.

Further, you must:


 Take a screenshot of each step implemented to research and select the required API client for
interacting with the NoSQL datastore selected and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to research and select the required API client for interacting
with the NoSQL datastore selected using Template 15.

Template 15: Research and select required API client for interacting with NoSQL data store according to
business requirements.

Research and select required API client for interacting with NoSQL data store (80-100
words)

Use a well-known third-party client that supports the particular NoSQL database or the official API
client offered by the database vendor to communicate with a NoSQL data store. The official MongoDB
Atlas Data API client, for instance, is available from MongoDB. The AWS SDKs, which include boto3
for Python, are provided by DynamoDB. DataStax drivers exist for Cassandra. Based on your
programming language and the NoSQL database's compatibility, select the API client. Ascertain that
the client offers the functionality, community support, and documentation required for a smooth
integration and productive engagement with the selected NoSQL data store.

Performance Criteria/Performance Checklist: Activity 15
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Researched API client for interacting


with NoSQL datastore:  
• Java
• JavaScript in the browser
• .NET
• Node.js
• PHP
• Python
• Ruby
• C++
• Go
• Android
• iOS

b) Select the required API Client ‘AWS SDK


for Python’.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 16: Instantiate and connect API client to NoSQL data store instance.

This part of the activity requires you to instantiate and connect the API client to the NoSQL data store
instance and document the outcomes using ‘Template 16’.

Description of the activity

This activity is a continuation of Activity 15.

This activity requires you to instantiate and connect the API client to the NoSQL data store instance.

This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To do so, you need to:

• Instantiate and connect API client to NoSQL data store instance using the below steps:
o Install or update Python
o Use the AWS Common Runtime (CRT)
o Complete AWS configuration

Further, you must:


 Take a screenshot of each step implemented to instantiate and connect the API client to the
NoSQL data store instance and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to instantiate and connect the API client to the NoSQL data
store instance using Template 16.

Template 16: Instantiate and connect API client to NoSQL data store instance.

Instantiate and connect API client to NoSQL data store instance (80-100 words)

Install or update Python

Visit the official Python website, download the most recent version, and launch the installer to begin
installing Python. During installation on Windows, be sure to tick "Add Python to PATH". Use
Homebrew (brew install python) on macOS. Use the package manager on Linux (sudo apt install
python3). Use python --version or python3 -V to confirm the installation.

Use the AWS Common Runtime (CRT)

AWS Common Runtime (CRT) is a suite of libraries for building cross-platform apps using AWS
services. It offers similar APIs for various programming languages, making it easy for developers to
interact with AWS. This improves the development and deployment process overall by guaranteeing
effective and dependable communication between applications and AWS services.

Complete AWS configuration

Establishing Identity and Access Management (IAM) users, configuring security settings, generating
and configuring Amazon S3 buckets, obtaining access keys, and creating an AWS account are all
necessary steps in configuring AWS. Use the AWS SDKs or CLI to gain programmatic access. For
maximum security and effectiveness, evaluate and manage setups on a regular basis.
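A quick way to confirm the three steps above worked is to create a client from the stored credentials and list the DynamoDB tables. The sketch below assumes the SDK was installed with the CRT extra and that 'aws configure' has already stored the access key ID, secret access key and default region.

# Assumed prior setup:
#   python -m pip install "boto3[crt]"    # AWS SDK for Python with the Common Runtime extra
#   aws configure                         # stores access key ID, secret access key and region
import boto3

session = boto3.Session()                 # picks up the credentials saved by 'aws configure'
dynamodb = session.client("dynamodb")

# A successful ListTables call confirms the client is connected to DynamoDB.
print(dynamodb.list_tables()["TableNames"])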

Performance Criteria/Performance Checklist: Activity 16
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Instantiated and connected API client to


NoSQL data store instance.  
b) Installed or updated Python
 
c) Used the AWS Common Runtime (CRT)
 
d) Completed AWS configuration
 

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 17: Insert a single data object into NoSQL datastore using the selected client application.

This part of the activity requires you to insert a single data object into NoSQL datastore using a
selected client application and document the outcomes using ‘Template 17’.

Description of the activity

This activity is a continuation of Activity 16.

This activity requires you to insert a single data object into a NoSQL datastore using the selected client
application.

This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.

To do so, you need to:

• Launch the AWS CLI and access DynamoDB with the API client access keys.
• Create/insert a new item/data object into ‘Table2’ by taking reference from
“sampledata.docx”.
• Add the code to a Python (.py) file.
• Upload the file using the AWS CLI.

Further, you must:
• Take a screenshot of each step implemented to insert a single data object into a NoSQL
datastore and submit it to the trainer/assessor via e-mail.
• Document the steps implemented to insert a single data object into a NoSQL datastore using
Template 17.

Template 17: Insert a single data object into NoSQL datastore using the selected client application.

Insert single data object into NoSQL datastore using a selected client application (80-100
words)

Create/Insert a New Item/data Object

Connect to the datastore using the NoSQL client application of your choice. Make a new data object
with the fields and values that you want. To enter the data object into the datastore, use the
command-line interface (CLI) provided by the client application. Verify that the data complies with
the database's specified schema or structure. Check for any error warnings or use the proper queries
to retrieve the entered data to confirm that the insertion was successful. Finish the insertion
procedure by committing the modifications to keep the new data in the NoSQL datastore.
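A minimal boto3 sketch of the insertion itself is shown below: one item is written to 'Table2' with put_item. The attribute values and region are placeholders rather than values from sampledata.docx.

import boto3

table = boto3.resource("dynamodb", region_name="us-west-2").Table("Table2")   # region assumed

# Insert a single item; the values below are placeholders.
table.put_item(
    Item={
        "ID": "006",        # partition key
        "age": 29,          # sort key
        "FName": "Alex",
        "LName": "Taylor",
    }
)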

Performance Criteria/Performance Checklist: Activity 17


Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Launched AWS CLI and accessed


DynamoDB with API Client access keys.  
b) Created/Inserted a New Item/data
Object into ‘Table2’ by taking reference  
from “sampledata.docx”.

c) Added the Code to python (.py) file.


 
d) Uploaded the file using the AWS CLI.
 

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 18: Insert multiple items in a single operation.

This part of the activity requires you to insert multiple items in a single operation and document the
outcomes using ‘Template 18’.

Description of the activity

This activity is a continuation of Activity 17.

This activity requires you to insert multiple items in a single operation from “sampledata.docx” file.

This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Create a JSON file and write code to insert multiple items for ‘Table3’ by taking reference from
“sampledata.docx”.
• Link the JSON file into python code.
• Run the python code on AWS CLI.

Further, you must:


 Take a screenshot of each step implemented to insert multiple items in a single operation and
submit it to the trainer/assessor via e-mail.
 Document the steps implemented to insert multiple items in a single operation using Template
18.

Template 18: Insert multiple items in a single operation.

Insert multiple items in a single operation (80-100 words)

Create a JSON file (for example ‘Table3.json’) in which the multiple items are written:

{
    "Table3": [
        {
            "id": "025",
            "FName": "Diana",
            "LName": "Johnson"
        },
        {
            "id": "023",
            "FName": "Caleb",
            "LName": "Peter"
        },
        {
            "id": "027",
            "FName": "Ria"
        }
    ]
}

Link the JSON file into python code, for example:

import json

# Path to the JSON file created above.
json_file_path = 'Table3.json'

with open(json_file_path, 'r') as file:
    data = json.load(file)

# Loop over the items loaded from the JSON file.
for item in data['Table3']:
    print(f"id: {item['id']}")
    print(f"FName: {item['FName']}")
    print(f"LName: {item.get('LName', '')}")   # the third sample item has no LName
    print()

Performance Criteria/Performance Checklist: Activity 18
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this S N/S Trainer/Assessor to complete


assessment task the participant needs to (Comment and feedback to students)
demonstrate competency in the following
critical aspects of evidence

a) Launched AWS CLI and accessed


DynamoDB with API Client access keys.  
b) Created a JSON file and wrote code to
insert multiple items for ‘Table3’ by taking  
reference from “sampledata.docx”.

c) Linked the JSON file into python code.


 
d) Ran the python code on AWS CLI.
 

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 19: Use query and select a single object.

This part of the activity requires you to use query and select a single object, and document the
outcomes using ‘Template 19’.

Description of the activity

This activity is a continuation of Activity 18.

This activity requires you to use a query and select a single object.

This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write Python code with a DynamoDB query to select a single object from ‘Table3’.
• Run the Python code on AWS CLI.

Further, you must:


 Take a screenshot of each step implemented to use query and select a single object and submit
it to the trainer/assessor via e-mail.
 Document the steps implemented to use query and select a single object using Template 19.

Template 19: Use query and select a single object.

Use query and select a single object (80-100 words)

Python code with a DynamoDB query, for example:

import boto3
from boto3.dynamodb.conditions import Key

aws_access_key_id = 'your-access-key-id'
aws_secret_access_key = 'your-secret-access-key'
region_name = 'us-west-2'

table_name = 'Table3'

dynamodb = boto3.resource('dynamodb',
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key=aws_secret_access_key,
                          region_name=region_name)
table = dynamodb.Table(table_name)

# The key condition must use the table's partition key; 'FName' is assumed
# here - substitute the actual key attribute of Table3 if it differs.
response = table.query(
    KeyConditionExpression=Key('FName').eq('Diana')
)

items = response['Items']
for item in items:
    print(item)

Performance Criteria/Performance Checklist: Activity 19
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Launched AWS CLI and accessed DynamoDB with API Client access keys.  
b) Wrote Python code with a DynamoDB query to select a single object from ‘Table3’.  
c) Ran the Python code on AWS CLI.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 20: Use query and retrieve multiple objects in batch.

This part of the activity requires you to use query and retrieve multiple objects in batch and document
the outcomes using ‘Template 20’.

Description of the activity

This activity is a continuation of Activity 19.

This activity requires you to use queries and retrieve multiple objects in a batch.

This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write Python code with a DynamoDB query to select multiple objects from ‘Table3’ in batch.
• Run the Python code on AWS CLI.

Further, you must:


 Take a screenshot of each step implemented to use query and retrieve multiple objects in batch
and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to use query and retrieve multiple objects in batch using
Template 20.

Template 20: Use query and retrieve multiple objects in batch.

Use query and retrieve multiple objects in batch (80-100 words)

Python code with DynamoDB query to select multiple objects in batch example:

import boto3

aws_access_key_id = 'your-access-key-id'
aws_secret_access_key = 'your-secret-access-key'
region_name = 'us-west-2'
table_name = 'Table3'

dynamodb = boto3.resource('dynamodb',
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key=aws_secret_access_key,
                          region_name=region_name)

# batch_get_item is called on the DynamoDB resource, not on a single table.
# Each entry in 'Keys' must be the full primary key of one item ('FName' is
# assumed here to be the partition key of Table3).
response = dynamodb.batch_get_item(
    RequestItems={
        table_name: {
            'Keys': [
                {'FName': 'Diana'},
                {'FName': 'Caleb'},
            ],
        }
    }
)
selected_items = response['Responses'].get(table_name, [])

for item in selected_items:
    print(item)

Performance Criteria/Performance Checklist: Activity 20
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Launched AWS CLI and accessed DynamoDB with API Client access keys.  
b) Wrote Python code with a DynamoDB query to select multiple objects from ‘Table3’ in batch.  
c) Ran the Python code on AWS CLI.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 21: Perform query against the index.

This part of the activity requires you to perform a query against the index and document the outcomes
using ‘Template 21’.

Description of the activity

This activity is a continuation of Activity 20.

This activity requires you to perform a query against the index.

This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write Python code with a DynamoDB query against the index for ‘Table3’ in a Python (.py) file.
• Run the Python (.py) file on AWS CLI.

Further, you must:


 Take a screenshot of each step implemented to perform a query against the index and submit it
to the trainer/assessor via e-mail.
 Document the steps implemented to perform a query against the index using Template 21.

Template 21: Perform a query against the index.

Perform query against the index (80-100 words)

DynamoDB query request parameters against an index, for example:

A simple key condition on the index partition key:

{
    "TableName": "Table3",
    "IndexName": "AgeAndIdIndex",
    "KeyConditionExpression": "Age = :age",
    "ExpressionAttributeValues": {
        ":age": 30
    }
}

A key condition with a range condition on the sort key and a projection:

{
    "TableName": "Table3",
    "IndexName": "AgeAndIdIndex",
    "KeyConditionExpression": "Age = :age AND id < :id",
    "ExpressionAttributeValues": {
        ":age": 30,
        ":id": "024"
    },
    "ProjectionExpression": "id, Age"
}
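Since the activity asks for the query to be written in a Python (.py) file, a minimal boto3 sketch of the same request could look like this (it assumes ‘Table3’ has a global secondary index named ‘AgeAndIdIndex’ with ‘Age’ as its partition key and ‘id’ as its sort key; adjust the names to the index you actually created):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamodb.Table('Table3')

# Pass IndexName so the query runs against the index, not the base table.
response = table.query(
    IndexName='AgeAndIdIndex',
    KeyConditionExpression=Key('Age').eq(30) & Key('id').lt('024'),
    ProjectionExpression='id, Age'
)

for item in response['Items']:
    print(item)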

Performance Criteria/Performance Checklist: Activity 21
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Launched AWS CLI and accessed DynamoDB with API Client access keys.  
b) Wrote Python code with a DynamoDB query against the index for ‘Table3’ in a Python (.py) file.  
c) Ran the Python (.py) file on AWS CLI.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 22: Perform query to select required attributes and project results.

This part of the activity requires you to perform a query to select required attributes and project results
and document the outcomes using ‘Template 22’.

Description of the activity

This activity is a continuation of Activity 21.

This activity requires you to perform a query to select the required attributes and project results.

This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write Python code with the query to select required attributes and project results from ‘Table3’ in a Python (.py) file.
• Run the Python (.py) file on AWS CLI.

Further, you must:


 Take a screenshot of each step implemented to perform a query to select the required attributes
and project results and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to perform a query to select the required attributes and
project results using Template 22.

Template 22: Perform query to select required attributes and project results.

Perform query to select required attributes and project results (80-100 words)

Query request (JSON) to select required attributes and project the results, for example (using the same ‘AgeAndIdIndex’ as Activity 21):

{
    "TableName": "Table3",
    "IndexName": "AgeAndIdIndex",
    "Limit": 3,
    "ProjectionExpression": "id, FName, LName",
    "KeyConditionExpression": "Age = :v1 AND id BETWEEN :v2a AND :v2b",
    "ExpressionAttributeValues": {
        ":v1": {"N": "30"},
        ":v2a": {"S": "020"},
        ":v2b": {"S": "027"}
    },
    "ReturnConsumedCapacity": "TOTAL"
}
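The same request expressed as Python in a .py file, as the activity requires, might look like the following boto3 sketch (again assuming the ‘AgeAndIdIndex’ described in Activity 21, with its projection including FName and LName):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamodb.Table('Table3')

response = table.query(
    IndexName='AgeAndIdIndex',
    # Select only the required attributes; the index must project them
    # (e.g. ProjectionType ALL or an INCLUDE list covering these names).
    ProjectionExpression='id, FName, LName',
    KeyConditionExpression=Key('Age').eq(30) & Key('id').between('020', '027'),
    Limit=3,
    ReturnConsumedCapacity='TOTAL'
)

for item in response['Items']:
    print(item)
print(response.get('ConsumedCapacity'))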

Performance Criteria/Performance Checklist: Activity 22
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Launched AWS CLI and accessed DynamoDB with API Client access keys.  
b) Wrote Python code with the query to select required attributes and project results from ‘Table3’ in a Python (.py) file.  
c) Ran the Python (.py) file on AWS CLI.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 23: Delete single and multiple objects according to business requirements.

This part of the activity requires you to delete single and multiple objects according to business
requirements and document the outcomes using ‘Template 23’.

Description of the activity

This activity is a continuation of Activity 22.

This activity requires you to delete single and multiple objects according to business requirements
based on the information provided in the case study.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write Python code with the query to delete a single object from ‘Table3’ in a Python (.py) file.
• Run the Python (.py) file on AWS CLI.
• Write Python code with the query to delete multiple objects from ‘Table3’ in a Python (.py) file.
• Run the Python (.py) file on AWS CLI.

Further, you must:


 Take a screenshot of each step implemented to delete single and multiple objects and submit it
to the trainer/assessor via e-mail.
 Document the steps implemented to delete single and multiple objects using Template 23.

Template 23: Delete single and multiple objects according to business requirements.

Delete single and multiple objects according to business requirements (80-100 words)

Python code with a query to delete a single object, for example:

import sqlite3

conn = sqlite3.connect('example.db')
cursor = conn.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")

# Sample rows to work with.
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('John Smith', 26))
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('Diana Johnson', 24))
conn.commit()

# Delete a single object (row) by its id.
def remove_one_object_by_id(object_id):
    cursor.execute("DELETE FROM users WHERE id = ?", (object_id,))
    conn.commit()

remove_one_object_by_id(21)

cursor.execute("SELECT * FROM users")
remaining_users = cursor.fetchall()

print("Remaining Users:")
for user in remaining_users:
    print(user)

Python code with a query to delete multiple objects, for example:

import sqlite3

conn = sqlite3.connect('example.db')
cursor = conn.cursor()

cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('John Smith', 26))
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('Caleb Peter', 30))
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('Diana Johnson', 24))
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('Narrin Brown', 30))
conn.commit()

# Delete every object (row) that matches the condition.
def remove_many_objects_by_age(age_threshold):
    cursor.execute("DELETE FROM users WHERE age >= ?", (age_threshold,))
    conn.commit()

remove_many_objects_by_age(30)
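The two examples above use SQLite purely to illustrate the delete statements; for the DynamoDB table ‘Table3’ that the activity targets, a minimal boto3 sketch could look like the following (it assumes ‘id’ is the partition key of ‘Table3’, with example item ids; adjust the Key to the table’s actual key schema):

import boto3

dynamodb = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamodb.Table('Table3')

# Delete a single item by its primary key.
table.delete_item(Key={'id': '027'})

# Delete multiple items; batch_writer groups the delete requests into
# BatchWriteItem calls automatically.
with table.batch_writer() as batch:
    for item_id in ['023', '025']:
        batch.delete_item(Key={'id': item_id})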

Performance Criteria/Performance Checklist: Activity 23
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Launched AWS CLI and accessed DynamoDB with API Client access keys.  
b) Wrote Python code with the query to delete a single object in a Python (.py) file.  
c) Ran the Python (.py) file on AWS CLI.  
d) Wrote Python code with the query to delete multiple objects in a Python (.py) file.  
e) Ran the Python (.py) file on AWS CLI.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 24: Update single and multiple objects according to business requirements.

This part of the activity requires you to update single and multiple objects according to business
requirements and document the outcomes using ‘Template 24’.

Description of the activity

This activity is a continuation of Activity 23.

This activity requires you to update single and multiple objects according to business requirements
based on the information provided in the case study.

This activity requires you to use an AWS account with DynamoDB.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Update a single item in ‘Table3’ using the SET, REMOVE, ADD, or DELETE keyword.
• Update multiple items in ‘Table3’ using batch operation.

Further, you must:


 Take a screenshot of each step implemented to update single and multiple objects according to
business requirements and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to update single and multiple objects according to business
requirements using Template 24.

Template 24: Update single and multiple objects according to business requirements.

Update single and multiple objects according to business requirements (80-100 words)

Updating a single item in DynamoDB, for example:

import boto3

aws_access_key_id = 'your-access-key-id'
aws_secret_access_key = 'your-secret-access-key'
aws_region = 'us-west-2'

dynamodb = boto3.resource('dynamodb',
                          region_name=aws_region,
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key=aws_secret_access_key)
table = dynamodb.Table('Table3')

# Update one item with a SET expression (assumes 'id' is the partition key).
item_key = {'id': '025'}
update_expression = 'SET UserName = :new_name'
expression_attribute_values = {':new_name': 'UpdatedUserName'}

response = table.update_item(
    Key=item_key,
    UpdateExpression=update_expression,
    ExpressionAttributeValues=expression_attribute_values,
    ReturnValues='UPDATED_NEW')

print(f"Updated item: {response['Attributes']}")

Updating multiple items in DynamoDB example:

import boto3

aws_access_key_id = 'your-access-key-id'
aws_secret_access_key = 'your-secret-access-key'
aws_region = 'us-west-2'
table_name = 'Table3'

dynamodb = boto3.resource('dynamodb',
                          region_name=aws_region,
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key=aws_secret_access_key)
table = dynamodb.Table(table_name)

# Items to update (assumes 'id' is the partition key of Table3).
items_to_update = [
    {'id': '025', 'NewAttribute': 'Diana'},
    {'id': '023', 'NewAttribute': 'Caleb'},
]

# BatchWriteItem does not support updates, so each item is updated with
# update_item inside a loop.
def update_multiple_items(items):
    for item in items:
        table.update_item(
            Key={'id': item['id']},
            UpdateExpression='SET NewAttribute = :val',
            ExpressionAttributeValues={':val': item['NewAttribute']},
            ReturnValues='UPDATED_NEW'
        )

update_multiple_items(items_to_update)

# Verify the updates by reading each item back.
for item in items_to_update:
    response = table.get_item(Key={'id': item['id']})
    updated_item = response.get('Item', {})
    print(f"Updated item for id {item['id']}: {updated_item}")

Performance Criteria/Performance Checklist: Activity 24
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Launched AWS CLI and accessed DynamoDB with API Client access keys.  
b) Updated a single item in ‘Table3’ using the SET, REMOVE, ADD, or DELETE keyword.  
c) Updated multiple items in ‘Table3’ using a batch operation.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 25: Persist objects with different data types.

This part of the activity requires you to persist objects with different data types and document the
outcomes using ‘Template 25’.

Description of the activity

This activity is a continuation of Activity 24.

This activity requires you to persist objects with different data types.

This activity requires you to use an AWS account with DynamoDB and Amazon SDK .NET services.

To do so, you need to:

• Launch AWS CLI and access DynamoDB with API Client access keys.
• Map objects, i.e., data items of ‘Table3’, with DynamoDB using the Amazon SDK for .NET Object Persistence Model.

Further, you must:


 Take a screenshot of each step implemented to persist objects with different data types and
submit it to the trainer/assessor via e-mail.
 Document the steps implemented to persist objects with different data types using Template 25.

Template 25: Persist objects with different data types.

Persist objects with different data types (80-100 words)

Mapping data, i.e., data items, with DynamoDB using the Amazon SDK for .NET Object Persistence Model

Selecting an appropriate database system and carefully designing your data model are essential when storing objects with various data types. In a relational database you would create a table with columns that represent the various data types and use an Object-Relational Mapping (ORM) framework for smooth integration. In a NoSQL database such as DynamoDB, each attribute of an item can have a distinct data type, which provides flexibility. Serialise objects into a standard format for storage and deserialise them on retrieval. Take data validation into account and confirm that the data complies with the specifications and capabilities of the database.
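The activity names the Amazon SDK for .NET Object Persistence Model; purely as an illustration of the same idea in the Python used elsewhere in this workbook, the sketch below persists one item whose attributes exercise several DynamoDB data types (the item values and the ‘id’ partition key are assumptions):

import boto3
from decimal import Decimal

dynamodb = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamodb.Table('Table3')

# One item whose attributes cover several DynamoDB data types.
table.put_item(Item={
    'id': '030',                                         # string (S)
    'FName': 'Alex',                                     # string (S)
    'Age': Decimal('28'),                                # number (N)
    'Active': True,                                      # boolean (BOOL)
    'Skills': {'python', 'sql'},                         # string set (SS)
    'Scores': [Decimal('7'), Decimal('9')],              # list (L)
    'Address': {'city': 'Sydney', 'postcode': '2000'},   # map (M)
})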

Performance Criteria/Performance Checklist: Activity 25
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Launched AWS CLI and accessed DynamoDB with API Client access keys.  
b) Mapped objects, i.e., data items, with DynamoDB using the Amazon SDK for .NET Object Persistence Model.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 26: Configure and confirm change event triggers and notifications according to business needs.

This part of the activity requires you to configure and confirm change event triggers and notifications
according to business needs and document the outcomes using ‘Template 26’.

Description of the activity

This activity is a continuation of Activity 25.

This activity requires you to configure and confirm change event triggers and notifications according to
business needs based on the information provided in the case study.

This activity requires you to use an AWS account with DynamoDB and AWS Lambda services.

To do so, you need to:

• Configure and confirm change event triggers and notifications on your Supervisor’s email
according to business needs using the below steps:
o Step 1: Enable Stream for ‘Table3’.
o Step 2: Create a Lambda Execution Role
o Step 3: Create an Amazon SNS Topic

Further, you must:


 Take a screenshot of each step implemented to configure and confirm change event triggers and
notifications and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to configure and confirm change event triggers and
notifications using Template 26.

Template 26: Configure and confirm change event triggers and notifications according to business
needs.

Configure and confirm change event triggers and notifications according to business needs
(80-100 words)

Step 1: Enable Stream for ‘Table3’.

Go to the AWS DynamoDB console, choose the table, click the "Overview" tab, and then click
"Manage DynamoDB Streams" to activate and set up the stream. This will enable DynamoDB Streams
for that particular table.

Step 2: Create a Lambda Execution Role

Using the IAM console, choose "Roles," create a new role, attach the
"AWSLambdaBasicExecutionRole" policy, and add inline policies as necessary to create a Lambda
execution role in AWS.

Step 3: Create an Amazon SNS Topic

Go to the SNS console in AWS, choose "Topics," click "Create topic," provide the topic's details, and
confirm the creation to create an Amazon SNS topic.
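Steps 1 and 3 can also be scripted; a minimal boto3 sketch is shown below (the topic name and the supervisor’s email address are placeholders to replace with the real values, and the IAM execution role from Step 2 is still created separately):

import boto3

region = 'us-west-2'

# Step 1: enable a stream on Table3 (capture new and old images of changed items).
dynamodb = boto3.client('dynamodb', region_name=region)
dynamodb.update_table(
    TableName='Table3',
    StreamSpecification={'StreamEnabled': True,
                         'StreamViewType': 'NEW_AND_OLD_IMAGES'}
)

# Step 3: create an SNS topic and subscribe the supervisor's email to it.
sns = boto3.client('sns', region_name=region)
topic = sns.create_topic(Name='Table3ChangeNotifications')
sns.subscribe(TopicArn=topic['TopicArn'],
              Protocol='email',
              Endpoint='supervisor@example.com')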

Performance Criteria/Performance Checklist: Activity 26
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Configured and confirmed change event triggers and notifications on the Supervisor’s email according to business needs.  
b) Enabled the stream for ‘Table3’.  
c) Created a Lambda execution role.  
d) Created an Amazon SNS topic.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 27: Test, fix and ensure responses and trigger notifications work according to business
requirements.

This part of the activity requires you to test, fix and ensure responses and trigger notifications work
according to business requirements and document the outcomes using ‘Template 27’.

Description of the activity

This activity is a continuation of Activity 26.

This activity requires you to test, fix and ensure responses and trigger notifications work according to
business requirements based on the information provided in the case study.

This activity requires you to use an AWS account with DynamoDB, CloudWatch and AWS Lambda
services.

To do so, you need to:

• Test and fix responses and trigger notifications work according to business requirements using
the below steps:
o Step 1: Create and Test a Lambda Function
o Step 2: Create and Test a Trigger
• Ensure responses and trigger notifications work by confirming them from the Supervisor’s email.

Further, you must:


 Take a screenshot of each step implemented to test, fix and ensure responses and trigger
notifications work according to business requirements and submit it to the trainer/assessor via
e-mail.
 Document the steps implemented to test, fix and ensure responses and trigger notifications
work according to business requirements using Template 27.

Template 27: Test, fix and ensure responses and trigger notifications work according to business
requirements.

Test, fix and ensure responses and trigger notifications work according to business
requirements (80-100 words)

Step 1: Create and Test a Lambda Function

Using the Lambda console, choose "Create function," establish triggers, runtime and code
parameters, and define environment variables to create an AWS Lambda function. Make sure the
function works as intended using sample or custom events by testing it with the "Test" button or AWS
CLI commands.

Step 2: Create and Test a Trigger
By setting up an event source (such as an S3 bucket or DynamoDB stream) to call a Lambda
function, you may create a trigger in AWS. Configure the event source settings, set up the trigger in
the AWS Lambda console, and test the trigger using dummy events to make sure the function is
invoked correctly.
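A minimal sketch of the Lambda function itself is shown below; it forwards each DynamoDB stream record to the SNS topic from Activity 26 (the topic ARN is a placeholder, and the function's execution role must be allowed to publish to that topic):

import json
import boto3

# Placeholder ARN - replace with the topic created in Activity 26.
SNS_TOPIC_ARN = 'arn:aws:sns:us-west-2:123456789012:Table3ChangeNotifications'
sns = boto3.client('sns')

def lambda_handler(event, context):
    # Each record describes one change captured by the DynamoDB stream.
    for record in event.get('Records', []):
        message = {
            'eventName': record.get('eventName'),              # INSERT / MODIFY / REMOVE
            'keys': record.get('dynamodb', {}).get('Keys'),
        }
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject='Table3 change event',
            Message=json.dumps(message)
        )
    return {'statusCode': 200, 'body': 'processed'}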

Performance Criteria/Performance Checklist: Activity 27
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Tested and fixed responses and trigger notifications so that they work according to business requirements.  
b) Created and tested a Lambda function.  
c) Created and tested a trigger.  
d) Ensured responses and trigger notifications work by confirming them from the Supervisor’s email.  

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 28: Review and confirm data is encrypted and authorisation and authentications are active
according to user and client access requirements.

This part of the activity requires you to review and confirm data is encrypted and authorisation and
authentications are active according to the user and client access requirements, and document the
outcomes using ‘Template 28’.

Description of the activity

This activity is a continuation of Activity 27.

This activity requires you to review and confirm data is encrypted and authorisation and authentications
are active according to the user and client access requirements based on the information provided in
the case study.

This activity requires you to use an AWS account with DynamoDB.

To do so, you need to:

• Review and confirm data in ‘Table3’ is encrypted according to user and client access
requirements.
• Review and confirm authorisation and authentications are active for ‘Table3’ according to the
user and client access requirements by following the below practices:
o Use IAM roles to authenticate access to DynamoDB
o Use IAM policies for DynamoDB based authorisation
o Use IAM policy conditions for fine-grained access control
o Use a VPC endpoint and policies to access DynamoDB
o Consider client-side encryption

Further, you must:


 Take a screenshot of each step implemented to review and confirm data is encrypted and
authorisation and authentications are active according to the user and client access
requirements, and submit it to the trainer/assessor via e-mail.
 Document the steps implemented to review and confirm data is encrypted and authorisation and
authentications are active according to the user and client access requirements using Template
28.

Template 28: Review and confirm data is encrypted and authorisation and authentications are active
according to user and client access requirements.

Review and confirm data is encrypted according to user and client access requirements

Examine data encryption methods to make sure they adhere to user and client access specifications.
Verify that the right access controls, key management procedures, and encryption protocols are being
used. Maintain regular evaluation and verification of security procedures to ensure data security and
compliance with regulatory requirements as well as organizational rules.
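One way to check the encryption-at-rest settings programmatically is with describe_table, sketched below (DynamoDB tables are always encrypted at rest; SSEDescription is typically only returned when a KMS key is used, so its absence usually indicates the default AWS owned key):

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-west-2')
table = dynamodb.describe_table(TableName='Table3')['Table']

sse = table.get('SSEDescription')
if sse:
    print('Encryption status:', sse.get('Status'))
    print('KMS key:', sse.get('KMSMasterKeyArn'))
else:
    print('No SSEDescription returned: the table uses the default AWS owned key.')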

Authorisation and authentications are active for ‘Table3’ according to user and client access requirements (100-200 words)
Use IAM roles to authenticate access to DynamoDB
Using IAM roles, you can define policies that grant the permissions required to access Amazon DynamoDB securely. Attach these policies to IAM roles so that services and applications can assume the roles and access DynamoDB resources safely.

Use IAM policies for DynamoDB based authorisation

Use IAM policies to implement authorization based on DynamoDB. Create policies that outline what
can be done with DynamoDB resources. To ensure precise control over access permissions based on
predetermined conditions, attach these rules to IAM users, groups, or roles.
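As an illustration, an IAM policy scoped to ‘Table3’ might look like the sketch below (the account ID and region are placeholders, and the action list should be trimmed to what each user or role actually needs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadWriteOnTable3",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Table3"
    }
  ]
}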

Use IAM policy conditions for fine-grained access control

Use IAM policy conditions to give AWS users fine-grained access control. To impose certain access
requirements, you can define criteria based on resource attributes, time, or IP address. This gives
you fine-grained control over permissions in your AWS environment.

Use a VPC endpoint and policies to access DynamoDB

Use a VPC endpoint for DynamoDB inside your Amazon VPC to improve security. In order to manage
access and guarantee fine-grained permissions, implement IAM policies. With this configuration, your
VPC and DynamoDB can communicate securely and privately without being visible to the internet.

Consider client-side encryption


Add a further layer of security to your data by using client-side encryption. Encrypt sensitive data on the client before sending it to the server, and make sure that key management and decryption are handled securely so that only authorised clients can access the data.

Performance Criteria/Performance Checklist: Activity 28
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Reviewed and confirmed data in ‘Table3’ is encrypted according to user and client access requirements.  
b) Reviewed and confirmed authorisation and authentications are active for ‘Table3’ according to user and client access requirements by following the below practices:  
• Use IAM roles to authenticate access to DynamoDB
• Use IAM policies for DynamoDB based authorisation
• Use IAM policy conditions for fine-grained access control
• Use a VPC endpoint and policies to access DynamoDB
• Consider client-side encryption

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 29: Test and fix data persistence process according to business requirements.

This part of the activity requires you to test and fix the data persistence process according to business
requirements and document the outcomes using ‘Template 29’.

Description of the activity

This activity is a continuation of Activity 28.

This activity requires you to test and fix the data persistence process according to business
requirements based on the information provided in the case study.

This activity requires you to use an AWS account with DynamoDB.

To do so, you need to:

• Test and fix the below data persistence processes by using DynamoDB with AWS:
o Data conversion
o Data migration
o Consistency model
o Security – encryption
o Network security – VPC endpoint for DynamoDB
o Performance – throughput and auto-scaling
o Required performance – microseconds versus milliseconds

Further, you must:


 Take a screenshot of each step implemented to test and fix the data persistence process and
submit it to the trainer/assessor via e-mail.
 Document the steps implemented to test and fix the data persistence process according to
business requirements using Template 29.

Template 29: Test and fix data persistence process according to business requirements.

Test and fix data persistence process according to business requirements ()

Data conversion
Verify mapping, assure data integrity, and conduct tests using a variety of data samples. Perform
error handling, maximize speed, and look for missing data. Make a data backup and record the
procedure. Iteratively solve problems, retesting following each modification. For conversion, constant
monitoring and updates provide a dependable and effective data persistence procedure.

Data migration
Verify the mapping, experiment with different data sets, and guarantee data integrity while
migrating. Optimize performance, create a reliable error handling system, and backup your data.
Keep track of the procedure and use iterative problem-solving. Ongoing observation ensures a
dependable and effective data migration procedure.

Consistency model

Verify the data consistency model, conduct a variety of scenario tests, and guarantee integrity. Put
error handling into practice, boost performance, and make data backups. Keep track of the procedure
and solve problems in iterations. Constant observation preserves the integrity of the consistency
model by guaranteeing a dependable and consistent data persistence method.
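For the consistency model specifically, a quick way to exercise both read modes in a test is sketched below (assuming ‘id’ is the partition key of ‘Table3’):

import boto3

table = boto3.resource('dynamodb', region_name='us-west-2').Table('Table3')

# Default read: eventually consistent (may briefly return stale data).
eventual = table.get_item(Key={'id': '025'})

# Strongly consistent read: reflects all writes acknowledged before the read.
strong = table.get_item(Key={'id': '025'}, ConsistentRead=True)

print(eventual.get('Item'))
print(strong.get('Item'))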

Security – encryption

Check for security precautions, validate encryption implementation, and test with a variety of data
sets. Optimize encryption performance, implement strong error handling, and backup encrypted data.
Iteratively address security vulnerabilities and document the process. Strong encryption methods and
ongoing monitoring guarantee a safe and effective data persistence process.

Network security – VPC endpoint for DynamoDB


Validate the DynamoDB VPC endpoint configuration, conduct a data transfer test, and guarantee
network security. Verify that IAM roles and policies are correct. Monitor network traffic, optimize
performance, and implement strong error handling. Iteratively address security vulnerabilities and
document the process. Using DynamoDB VPC endpoints guarantees a secure data persistence process
through ongoing testing and updates.

Performance – throughput and auto-scaling


Verify auto-scaling and throughput settings while putting different workloads through testing. Adjust
provisioned throughput and auto-scaling settings to maximize performance. Put in place reliable
error-handling procedures. Continue to keep an eye on performance metrics. Keep track of
modifications and deal with problems repeatedly. This guarantees a data persistence method that is
both robust and effective, with adjustable throughput and auto-scaling capabilities.

Required performance – microseconds versus milliseconds.
Test using scenarios with millisecond and microsecond latency to validate performance requirements.
To reach predetermined performance goals, optimize the data persistence procedure. Put
performance monitoring into practice and make necessary adjustments. Keep track of modifications
and deal with problems repeatedly. Whether measured in milliseconds or microseconds, this
guarantees that the data persistence procedure continuously satisfies necessary performance
requirements.

Performance Criteria/Performance Checklist: Activity 29
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Tested and fixed the below data persistence processes:  
• Data conversion
• Data migration
• Consistency model
• Security – encryption
• Network security – VPC endpoint for DynamoDB
• Performance – throughput and auto-scaling
• Required performance – microseconds versus milliseconds

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Activity 30: Document and finalise work according to business requirements.

This part of the activity requires you to document and finalise work according to business requirements
and document the outcomes using ‘Template 30’.

Description of the activity

This activity is a continuation of Activity 29.

This activity requires you to document and finalise work according to business requirements based on
the information provided in the case study.

This activity requires you to use MS Word.

To do so, you need to:

• Finalise the work using the below elements and document them using Template 30:
o Include the scope of the project
o Record every resource that is being used during the project
o Mention the people involved in the project
o Merge all the contents
• Document work by following the below practices:
o Prioritise security
o Record and monitor the documents
o Establish a record management plan
o Review periodically
o Dispose of or destroy records at the end of their lifecycle
o Ensure that the data is accurate
o Digitise physical records

Template 30: Document and finalise work according to business requirements.

Document and finalise work according to business requirements ()

 Introduction and scope:

Welcome to the documentation for our NoSQL data store! This all-inclusive manual prepares you
to perform essential tasks: formulating queries to create, update, and remove various kinds of
data. In order to improve data efficiency, also investigate the development of two strategic
indexes, comprehend the significance of defining partition and sort keys, and learn about
optimization strategies. This handbook offers developers and administrators a comprehensive
scope to efficiently handle key tasks and streamline data management.

 Configuration and Installation:

Install a code editor, the official client or SDK for the selected NoSQL database (MongoDB, DynamoDB, etc.), and the database itself, then set up a development environment. Plan the partition and sort keys in advance. Install index creation and management tools if necessary, and obtain login credentials. Consult the official documentation for rules and best practices.

 Application and illustrations:

Use the SDK or NoSQL database client to communicate with the data store. For example, write queries that add, remove, and modify items of various data types. Create indexes to support efficient querying. Specify the partition and sort keys according to the data model's requirements, and optimise data storage for better performance.

 Design and Architecture:

Describe the architecture of NoSQL databases with an emphasis on performance and scalability.
Determine essential elements, like databases and indexes. Provide partition and sort keys in your
data models to ensure effective storage and retrieval. For the best possible handling of read and
write operations, take into account the overall architecture of the system.

 FAQs & Troubleshooting:

Investigate common problems by looking through error logs and messages. FAQs respond to
often asked questions about data optimization, index building, and query execution. To obtain
complete troubleshooting methods and supplementary FAQs, refer to the extensive
documentation or contact the NoSQL database community and forums.

Performance Criteria/Performance Checklist: Activity 30
Your task must address the following performance criteria/ performance checklist.

To be assessed as satisfactory (S) in this assessment task, the participant needs to demonstrate competency in the following critical aspects of evidence. For each item, the Trainer/Assessor ticks S or N/S and provides comment and feedback to the student.

a) Finalised the work using the below elements:  
• Include the scope of the project
• Record every resource that is being used during the project
• Mention the people involved in the project
• Merge all the contents
b) Documented work by following the below practices:  
• Prioritise security
• Record and monitor the documents
• Establish a record management plan
• Review periodically
• Dispose of or destroy records at the end of their lifecycle
• Ensure that the data is accurate
• Digitise physical records

The student’s performance was:  Not satisfactory

 Satisfactory

Feedback to student:

Student signature

Observer signature

Assessment Results Sheet
Outcome First attempt:

Outcome (make sure to tick the correct checkbox):

Satisfactory (S) or Not Satisfactory (NS)

Date: _______(day)/ _______(month)/ ____________(year)

Feedback:

Second attempt:

Outcome (make sure to tick the correct checkbox):

Satisfactory (S) or Not Satisfactory (NS)

Date: _______(day)/ _______(month)/ ____________(year)


Feedback:

Student Declaration:
 I declare that the answers I have provided are my own work. Where I have accessed information from other sources, I have provided references and/or links to my sources.
 I have kept a copy of all relevant notes and reference material that I
used as part of my submission.
 I have provided references for all sources where the information is not
my own. I understand the consequences of falsifying documentation and
plagiarism. I understand how the assessment is structured. I accept that
the work I submit may be subject to verification to establish that it is
my own.
 I understand that if I disagree with the assessment outcome, I can
appeal the assessment process, and either re-submit additional evidence

undertake gap training, and/or have my submission re-assessed.
 All appeal options have been explained to me.

Student Signature

Date

Trainer/Assessor
Name
Trainer/Assessor Declaration: I hold:
Vocational competencies at least to the level being delivered
Current relevant industry skills
Current knowledge and skills in VET, and undertake ongoing professional development in VET

I declare that I have conducted an assessment of this student’s submission.


The assessment tasks were deemed current, sufficient, valid and reliable. I
declare that I have conducted a fair, valid, reliable, and flexible assessment.
I have provided feedback to the student.

Trainer/Assessor
Signature
Date

Office Use Only Outcome of Assessment has been entered into the Student Management
System

on _________________ (insert date)

by (insert Name) __________________________________
