ICTPRG554 Student Assessment
Course Name:
Assessor’s Feedback:
Assessor’s Signature:
Date: / /
Indirect Evidence – things that someone else has observed and reported to us, e.g., third party reports
Supplementary Evidence – other things that can indicate performance, such as training records, questions,
written work, portfolios
Case Study
Observation/Demonstration
Practical Activity
Questions
Assessment must comply with the assessment methods of the training package and be conducted in accordance with
the Principles of Assessment and assessment conditions. This means the assessment must be fair, flexible, reliable
and valid.
Your ability to recognise common principles and actively use these on the job.
All of your assessment and training is provided as a positive learning tool. Your assessor will guide your learning and
provide feedback on your responses to the assessment materials until you have been deemed competent in this unit.
1.1 HOW YOU WILL BE ASSESSED
The process we follow is known as competency-based assessment. This means that evidence of your current skills
and knowledge will be measured against national standards of best practice, not against the learning you have
undertaken either recently or in the past. Some of the assessment will be concerned with how you apply your skills
and knowledge in the workplace, and some in the training room as required by each unit.
The assessment tasks have been designed to enable you to demonstrate the required skills and knowledge and
produce the critical evidence to successfully demonstrate competency at the required standard.
Your assessor will ensure that you are ready for assessment and will explain the assessment process. Your
assessment tasks will outline the evidence to be collected and how it will be collected, for example: a written activity,
case study, or demonstration and observation.
The assessor will also have determined if you have any special needs to be considered during assessment. Changes
can be made to the way assessment is undertaken to account for special needs and this is called making Reasonable
Adjustment.
What happens if your result is ‘Not Yet Competent’ for one or more assessment tasks?
Our assessment process is designed to answer the question “has the desired learning outcome been achieved yet?” If
the answer is “Not yet”, then we work with you to see how we can get there.
If one or more of your assessments has been marked ‘NYC’, your trainer will provide you with the
necessary feedback and guidance in order for you to resubmit your responses.
Assessor Responsibilities
Assessors need to be aware of their responsibilities and carry them out appropriately. To do this they need to:
Ensure that participants are assessed fairly based on the outcome of the language, literacy and numeracy
review completed at enrolment.
Ensure that all documentation is signed by the student, trainer, workplace supervisor and assessor when
units and certificates are complete, to ensure that there is no follow-up required from an administration
perspective.
When required, request the manager or supervisor to determine that the student is ‘satisfactorily’
demonstrating the requirements for each unit. ‘Satisfactorily’ means consistently meeting the standard
expected from an experienced operator.
When required, ensure supervisors and students sign off on third party assessment forms or third party
report.
For a book: Author surname, author initial, Year of publication, Title of book, Publisher, City, State
You will receive an overall result of Competent or Not Yet Competent for the unit. The assessment process is
made up of a number of assessment methods. You are required to achieve a satisfactory result in each of these
to be deemed competent overall. Your assessment may include the following assessment types.
Questions
Satisfactory: All questions answered correctly. Answers address the question in full, referring to appropriate sources from your workbook and/or workplace.
Not Satisfactory: Incorrect answers for one or more questions. Answers do not address the question in full, or do not refer to appropriate or correct sources.

Third Party Report
Satisfactory: Supervisor or manager observes work performance and confirms that you consistently meet the standards expected from an experienced operator.
Not Satisfactory: Could not demonstrate consistency, or could not demonstrate the ability to achieve the required standard.

Written Activity
Satisfactory: The assessor will mark the activity against the detailed guidelines/instructions. Attachments, if requested, are attached. All requirements of the written activity are addressed/covered.
Not Satisfactory: Does not follow guidelines/instructions. Requested supplementary items are not attached. Response does not address the requirements in full, or is missing a response for one or more areas.
Your trainer is required to fill out the Assessment Plan Outcome records above, when:
• You have completed and submitted all the requirements for the assessment tasks for this
cluster or unit of competency.
• Your work has been reviewed and assessed by your trainer/assessor.
• You have been assessed as either satisfactory or unsatisfactory for each assessment task
within the unit of competency.
• You have been provided with relevant and detailed feedback.
Every assessment has a “Feedback to Student” section used to record the following information. Your
trainer/assessor must also ensure that all sections are filled in appropriately, such as:
3. Unit Requirements
You, the student, must read and understand all of the information in the Unit Requirements before
completing the Student Pack. If you have any questions regarding the information, see your
trainer/assessor for further information and clarification.
Written Questions
This is the first (1) assessment task you must successfully complete to be deemed
competent in this unit of competency.
The Knowledge Test comprises twelve (12) written questions.
You must respond to all questions and submit them to your Trainer/Assessor.
You must answer all questions to the required level, e.g. provide an answer within the
required word limit, to be deemed satisfactory in this task.
You will receive your feedback within two (2) weeks, and you will be notified by your
Trainer/Assessor when your results are available.
Applicable conditions:
All knowledge tests are untimed and are conducted as open book assessment (this means
you can refer to your textbook during the test).
You must read and respond to all questions.
You may handwrite/use a computer to answer the questions.
You must complete the task independently.
No marks or grades are allocated for this assessment task. The outcome of the task will be
Satisfactory or Not Satisfactory.
As you complete this assessment task, you are predominantly demonstrating your written
skills and knowledge to your trainer/assessor.
Where a student’s answers are deemed not satisfactory after the first attempt, a
resubmission attempt will be allowed.
The student may speak to their trainer/assessor if they have any difficulty in completing
this task and require reasonable adjustments.
For more information, please refer to the Training Organisation’s Student Handbook.
Location:
Your trainer/assessor will provide you with further information regarding the location for
completing this assessment task.
This assessment task is designed to evaluate the student’s knowledge essential to managing data
persistence using NoSQL data stores across a range of contexts and industry settings, and knowledge
regarding the following:
Knowledge of benefits and functions of NoSQL databases and schema-free data persistence,
as well as traditional relational data models
Knowledge of methods and different features and functions between scaling out and scaling
up (horizontal and vertical)
Knowledge of language used in the required programming language for NoSQL applications
Knowledge of partitioning in a NoSQL environment and its related terms
Knowledge of functions and features for time-to-live (TTL) requirements
Knowledge of authorisation and authentication procedures and levels of responsibility
according to client access requirements
Knowledge of distribution of data storage across partitions
Knowledge of debugging and testing methodologies and techniques
Knowledge of functions and features of sort keys in NoSQL storage
Knowledge of features of transport encryption, authentication and authorisation
Knowledge of different NoSQL data store formats, including:
o key value
o document based
o column based
o graph based
Task instructions
This is an individual assessment.
To ensure your responses are satisfactory, consult a range of learning resources and other
information such as handouts, textbooks, learner resources etc.
To be assessed as Satisfactory in this assessment task, all questions must be answered
correctly.
Q1: Answer the following questions regarding benefits and functions of NoSQL database and schema-free data persistence, as well as traditional relational data models. (Satisfactory response: Yes / No)
1.1. Explain the functions and benefits of the NoSQL database. Write your answer in 200-250 words.
Ans 1.1: NoSQL databases, or "Not Only SQL" databases, are a category of database management systems that diverge
from traditional relational databases in terms of data storage and retrieval. The functions and benefits of NoSQL
databases can be elucidated as follows:
1. **Flexibility and Schema-less Design**: Unlike traditional relational databases with a predefined schema, NoSQL databases embrace a flexible, schema-less approach. This flexibility enables developers to adapt to evolving data requirements without the need for significant alterations to the database structure.
2. **High Performance**: NoSQL databases are often optimized for specific use cases, leading to enhanced performance in certain scenarios. They can efficiently handle tasks like real-time analytics, content management, and other applications that demand rapid data access.
3. **Support for Unstructured Data**: NoSQL databases are adept at managing unstructured or semi-structured data, such as JSON or XML documents. This makes them suitable for applications dealing with diverse data types like social media posts, user-generated content, and sensor data.
4. **Cost-Efficiency**: NoSQL databases can be more cost-effective for certain applications, particularly those with massive amounts of data and dynamic scaling requirements. Their ability to run on commodity hardware and in distributed environments contributes to cost savings.
In conclusion, NoSQL databases provide a range of benefits, including scalability, flexibility, high performance, support for unstructured data, ease of development, and cost-efficiency. These features make them well-suited for modern applications with diverse and evolving data needs.
References:
- Stonebraker, M., & Cattell, R. (2011). 10 Rules for Scalable Performance in Simple Operation NoSQL Data Stores.
ACM SIGMOD Record, 40(2), 10–18. doi: 10.1145/1978915.1978919.
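The schema-free flexibility described in Ans 1.1 can be illustrated with a short, hypothetical Python sketch. The collection, field names, and helper function below are invented for illustration; real document stores such as MongoDB apply the same principle natively.

```python
# Hypothetical in-memory "collection" standing in for a document store.
collection = []

def insert(document: dict) -> None:
    """Store a document as-is; no schema is enforced."""
    collection.append(document)

# Two user records with different shapes coexist in the same collection.
insert({"name": "Asha", "email": "asha@example.com"})
insert({"name": "Ben", "email": "ben@example.com",
        "social": {"handle": "@ben", "followers": 120}})  # new, nested field

# Queries simply skip documents that lack the field -- no ALTER TABLE needed.
with_social = [d["name"] for d in collection if "social" in d]
print(with_social)  # ['Ben']
```

Adding the nested `social` field required no change to any existing record, which is exactly the "adapt without altering the database structure" benefit described above.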
Ans 1.2: Schema-free data persistence, a characteristic often associated with NoSQL databases, provides several
benefits and functions that cater to the needs of modern, dynamic applications.
**Improved Performance**: The absence of a fixed schema can contribute to improved query performance. Schema-free databases often store data in a way that is optimized for retrieval, reducing the need for complex joins and enabling faster data access, especially in scenarios where the data structure is hierarchical or nested.
In conclusion, schema-free data persistence provides benefits such as flexibility, easy data evolution, simplified
development, improved performance, and support for unstructured data. These advantages make it an ideal
choice for applications where adaptability, speed, and efficient handling of diverse data are paramount.
Ans 1.3: Traditional relational data models offer several benefits and functions that have made them foundational in
database management systems (RDBMS). These include:
1. **Structured Query Language (SQL)**: The standardized query language SQL is a powerful tool for interacting with relational databases. It allows for efficient data manipulation, retrieval, and management, offering a common interface for users (Date, 2011).
2. **Complex Query Support and Joins**: Relational databases excel in handling complex queries and joining tables, facilitating sophisticated data analysis and reporting (Codd, 1970).
3. **Data Security and Access Control**: RDBMS offer robust security features, including access control mechanisms that allow administrators to define user roles and permissions, ensuring data security (Ramakrishnan & Gehrke, 2003).
These benefits make traditional relational data models well-suited for applications where data consistency, integrity, and relational structure are critical.
References:
- Codd, E. F. (1970). A Relational Model of Data for Large Shared Data Banks. Communications of the ACM, 13(6), 377–387.
- Date, C. J. (2004). An Introduction to Database Systems. Addison-Wesley.
- Date, C. J. (2011). SQL and Relational Theory: How to Write Accurate SQL Code. O'Reilly Media.
- Gray, J., & Reuter, A. (1993). Transaction Processing: Concepts and Techniques. Morgan Kaufmann.
- O'Brien, J. A., & Marakas, G. M. (2010). Management Information Systems. McGraw-Hill/Irwin.
- Ramakrishnan, R., & Gehrke, J. (2003). Database Management Systems. McGraw-Hill.
1. **Scaling Out (Horizontal Scaling):**
- **Method:** In horizontal scaling, the focus is on adding more hardware resources, such as servers or nodes, to distribute the load across multiple machines. This approach is often associated with the use of clusters and distributed architectures.
- **Features and Functions:**
- **High Availability:** Horizontal scaling enhances system reliability by distributing workloads across multiple servers. If one server fails, others can take over, ensuring continuous service availability.
- **Linear Scalability:** As more resources are added, the system's capacity increases linearly. This means that performance scales proportionally with the addition of each new node or server.
- **Distributed Databases:** Horizontal scaling is often associated with NoSQL databases and distributed storage systems that can easily distribute data across multiple nodes.
2. **Scaling Up (Vertical Scaling):**
- **Features and Functions:**
- **Single-System Management:** Managing a single, larger server is generally simpler than dealing with a
distributed system. This can lead to easier maintenance and troubleshooting.
- **Quick Implementation:** Upgrading the resources of a single machine can be a quicker solution compared to
adding new servers and configuring a distributed environment.
- **Compatibility:** Some legacy applications may be designed to work more efficiently on a single, powerful machine, making vertical scaling a more compatible choice.

ICTPRG533 - Manage data persistence using NoSQL data stores, V01.2022, Page 17 of 175
References:
- Tanenbaum, A. S., & Van Steen, M. (2007). Distributed Systems: Principles and Paradigms. Prentice Hall.
- Leavitt, N. (2009). Will NoSQL Databases Live Up to Their Promise? Computer, 42(2), 12–14. doi: 10.1109/MC.2009.51
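As an illustrative sketch of scaling out, the following hypothetical Python snippet spreads keys across nodes by hashing. The node names and the naive hash-mod scheme are invented for illustration; production systems typically use consistent hashing instead, so that adding a node moves far fewer keys.

```python
import hashlib

def node_for(key: str, nodes: list) -> str:
    """Pick a node by hashing the key (a naive, non-consistent scheme)."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["node-a", "node-b"]
keys = [f"user:{i}" for i in range(1000)]
placement = {k: node_for(k, nodes) for k in keys}

# Scaling out: add a third node and the same keys now spread over three machines.
nodes.append("node-c")
placement_after = {k: node_for(k, nodes) for k in keys}
print(sorted(set(placement_after.values())))  # all three nodes are in use
```

Because the modulus changes when a node is added, most keys change placement under this naive scheme; that re-shuffling cost is precisely why distributed stores favour consistent hashing.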
Q3: Answer the following questions regarding the language used in the required programming language for NoSQL applications. (Satisfactory response: Yes / No)
3.1. Describe the language used in the required programming language for NoSQL applications. Write your answer in 200-250 words.
Ans 3.1: The language used in NoSQL databases for programming applications varies depending on the specific database system. NoSQL databases are designed to be flexible, accommodating a variety of data models, and hence they support different programming languages. Some of the common languages used in NoSQL applications are:
1. **Python:** Python is widely adopted in the NoSQL ecosystem. For instance, MongoDB provides a Python driver that allows developers to interact with the database using Python. Python's simplicity and readability make it a popular choice for working with NoSQL databases, especially in scenarios involving data analysis and manipulation.
2. **Ruby:** Ruby is favored in certain NoSQL ecosystems, such as the use of Ruby on Rails with document-oriented databases like CouchDB. Ruby's elegant syntax and ease of use make it suitable for web development.
3. **C# (C-Sharp):** Microsoft's .NET framework and C# are utilized in conjunction with NoSQL databases like Azure Cosmos DB. C# drivers are available for connecting and interacting with these databases, making it a choice for developers within the Microsoft ecosystem.
It's important to note that the language used for programming NoSQL applications is often determined by the specific database and the corresponding drivers or APIs provided by the database vendors. Developers can choose the language that aligns with their expertise and the requirements of their application.
References:
- MongoDB. (n.d.). MongoDB Drivers. Retrieved from https://docs.mongodb.com/drivers/
4.1. Describe the below partitioning strategies in a NoSQL environment and their related terms. (Satisfactory response: Yes / No)
• Vertical partitioning
• Functional partitioning
Ans 4.1: In a NoSQL environment, partitioning strategies are crucial for distributing and managing large datasets
efficiently. Here are descriptions of the mentioned partitioning strategies:
1. **Vertical Partitioning:**
- **Description:** Vertical partitioning involves dividing a dataset based on columns or attributes. Different columns
of a table are stored separately, often on different nodes or servers. This strategy is useful when certain columns are
accessed more frequently than others, allowing for better resource utilization and improved query performance.
- **Related Terms:** Also known as columnar partitioning, this strategy contrasts with horizontal partitioning by
dividing data based on attributes rather than rows.
2. **Horizontal Partitioning ("Sharding"):**
- **Related Terms:** Sharding is a form of data partitioning where each shard is a self-contained subset of the data. It helps in parallelizing queries and spreading the data storage and processing load.
3. **Functional Partitioning:**
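The vertical partitioning strategy described in point 1 can be sketched with a hypothetical Python example. The store names and the hot/cold column split below are invented for illustration only.

```python
# Hot columns are read on every request; cold columns are large and rarely read.
HOT_COLUMNS = {"id", "name", "email"}

hot_store = {}   # small, frequently accessed attributes
cold_store = {}  # bulky, rarely accessed attributes

def insert_user(row: dict) -> None:
    """Split one logical row across two stores by column."""
    uid = row["id"]
    hot_store[uid] = {k: v for k, v in row.items() if k in HOT_COLUMNS}
    cold_store[uid] = {k: v for k, v in row.items() if k not in HOT_COLUMNS}

insert_user({"id": 1, "name": "Asha", "email": "asha@example.com",
             "bio": "a long profile text", "avatar_blob": "binary data"})

print(hot_store[1])            # only the frequently used columns
print("bio" in cold_store[1])  # True: bulky data lives elsewhere
```

Requests that only need the hot columns never touch the cold store, which is the resource-utilisation benefit described above.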
Q5: Answer the following questions regarding functions and features for time-to-live (TTL) requirements. (Satisfactory response: Yes / No)
5.1. Explain functions for time-to-live (TTL) requirements. Write your answer in 100-150 words.
5.2. Explain features for time-to-live (TTL) requirements. Write your answer in 100-150 words.
3. **Cache Invalidation:** In caching systems, TTL is used to invalidate cached data after a certain time,
ensuring that users receive up-to-date information and preventing the serving of stale or obsolete content.
4. **Compliance and Privacy:** For compliance reasons or privacy concerns, TTL can be employed to enforce
data retention policies. It helps organizations adhere to regulations by automatically purging data that is no longer
required.
TTL requirements play a crucial role in maintaining data freshness, optimizing resource utilization, and
ensuring compliance with data management policies.
References:
• Redmond, E., & Wilson, J. R. (2012). Seven Databases in Seven Weeks: A Guide to Modern Databases and the NoSQL Movement. Pragmatic Bookshelf.
Ans 5.2: Features for time-to-live (TTL) requirements in databases include:
1. **Configurable Expiry:** TTL features allow users to set and configure the expiry duration for each piece of data, enabling fine-grained control over how long information should be retained in the database.
2. **Automated Data Deletion:** The primary function is the automatic removal of data once its TTL expires.
This feature helps maintain database hygiene by automatically cleaning up stale or obsolete records, reducing storage overhead.
3. **Event-Driven Expiry:** Some systems provide event-driven TTL, allowing data to expire based on specific
events or triggers, ensuring that data removal aligns with the changing requirements of the application.
4. **Secondary Index Expiry:** In systems with secondary indexes, TTL features may extend to automatically removing associated index entries, ensuring comprehensive data cleanup.
Reference:
- Apache Cassandra Documentation. (n.d.). Time Window Compaction Strategy. Retrieved from http://cassandra.apache.org/doc/latest/operating/compaction/twcs.html
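A minimal, hypothetical Python sketch of the TTL behaviour described above. The `TTLStore` class is invented for illustration; real stores such as Redis (via `EXPIRE`) or DynamoDB (via a TTL attribute) implement the same idea natively.

```python
import time

class TTLStore:
    """Sketch of per-item expiry; the clock is injectable for testing."""

    def __init__(self, clock=time.time):
        self._data = {}
        self._clock = clock

    def put(self, key, value, ttl_seconds):
        # Store the value together with its absolute expiry time.
        self._data[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        # Lazy expiry: purge the item if its TTL has elapsed.
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._data[key]
            return None
        return value

# A simulated clock keeps the example deterministic.
now = [0.0]
store = TTLStore(clock=lambda: now[0])
store.put("session:42", {"user": "asha"}, ttl_seconds=30)
print(store.get("session:42"))  # {'user': 'asha'} -- still fresh
now[0] = 31.0
print(store.get("session:42"))  # None -- expired and purged
```

The lazy purge-on-read shown here is one common design; many stores also run a background sweep so that expired items do not linger unread.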
Q6: Answer the following questions regarding the authorisation and authentication procedures and levels of responsibility according to client access requirements. (Satisfactory response: Yes / No)
6.1. Explain authorisation and authentication procedures according to client access requirements. Write your answer in 200-220 words.
Ans 6.1: **Authorisation and Authentication Procedures:**
Authorisation and authentication are critical components of ensuring secure access to client systems.
Authentication verifies the identity of users, ensuring they are who they claim to be. Common methods include
passwords, multi-factor authentication, and biometrics. Authorisation, on the other hand, involves granting or
denying access rights based on authenticated users' permissions. Access control lists, role-based access control,
and policies are commonly employed.
Client access requirements should dictate the choice of authentication methods to align with security needs. For
example, sensitive systems may require multi-factor authentication, while less critical systems may rely on
password-based authentication.
Ans 6.2: **Levels of Responsibility for Authorisation and Authentication:**
- **Full Authorisation Responsibility:**
- **Description:** Users with full authorisation responsibility have unrestricted access to the system, including the ability to modify access controls and grant permissions.
- **Role:** Administrators and superusers typically hold full authorisation responsibility.
- **Reference:** This aligns with the principle of least privilege, where users are granted the minimum level of access necessary to perform their duties (Saltzer & Clark, 1975).
- **Restricted Authorisation Responsibility:**
- **Description:** Users with restricted authorisation can manage certain aspects of authorisation, but their
abilities are limited compared to those with full responsibility.
- **Role:** Middle-tier administrators or managers may fall into this category, with the ability to manage
access within specific departments or projects.
- **Reference:** This helps distribute administrative tasks while still maintaining control over critical aspects
of the system (Anderson, 2008).
- **Role:** Regular users with limited or predefined roles and permissions fall into this category.
**References:**
- Saltzer, D. J., & Clark, D. D. (1975). "Principles of Secure Computer Systems." Report MIT-LCS-TR-341.
- Anderson, R. (2008). "Security Engineering: A Guide to Building Dependable Distributed Systems." Wiley.
- Gardezi, S. J. D., & Ab Razak, S. (2014). "Authentication and Authorization Architecture for Cloud Computing." 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing. doi: 10.1109/UCC.2014.85
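The levels of responsibility above can be sketched as a simple, hypothetical role-based access control table in Python. The role and action names are invented for illustration.

```python
# Permission sets for each level of responsibility described above.
PERMISSIONS = {
    "admin":   {"read", "write", "grant"},  # full authorisation responsibility
    "manager": {"read", "write"},           # restricted: cannot grant access
    "user":    {"read"},                    # least-privilege, read-only access
}

def authorise(role: str, action: str) -> bool:
    """Authorisation check for an already-authenticated user's role."""
    return action in PERMISSIONS.get(role, set())

print(authorise("admin", "grant"))    # True
print(authorise("manager", "grant"))  # False
print(authorise("user", "write"))     # False
```

Note the separation of concerns: authentication establishes *who* the caller is before this table is ever consulted, and authorisation then decides *what* that identity may do.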
Q7: Answer the following questions regarding the distribution of data storage across partitions. (Satisfactory response: Yes / No)
Ans 7.1: **Distribution of Data Storage Across Partitions:**
In the context of database management systems, the distribution of data storage across partitions refers to the practice of dividing a dataset into smaller subsets, each residing on separate storage units or servers. This approach, often known as partitioning, is employed to enhance system performance, scalability, and manageability.
By distributing data across partitions, systems can achieve parallelism in processing, allowing for concurrent read and write
operations on different parts of the dataset. This is especially crucial in distributed databases and parallel computing
environments.
References:
- DeCandia, G., Hastorun, D., Jampani, M., Kakulapati, G., Lakshman, A., Pilchin, A., ... Vogels, W. (2007). "Dynamo: Amazon’s Highly Available Key-value Store." ACM SIGOPS Operating Systems Review, 41(6), 205–220. doi: 10.1145/1323293.1294281
Q8: Answer the following questions regarding the debugging and testing methodologies and techniques. (Satisfactory response: Yes / No)
8.1. Explain the debugging and testing methodologies. Write your answer in 80-100 words.
• Deduction strategy
• Backtracking strategy
• Induction strategy
Ans 8.1: **Debugging and Testing Methodologies:**
Ans 8.2: **Debugging and Testing Techniques:**
1. **Deduction Strategy:**
- **Description:** This technique involves systematically analyzing code, identifying potential sources of errors through
logical reasoning, and deducing the most likely causes of issues.
- **Reference:**
- Börstler, J., Vihavainen, A., Paterson, J., & Adams, E. (2013). "Deductive Reasoning for the Automated Grading of
Prolog Programs." ACM Transactions on Computing Education
(TOCE), 13(4), 16. [DOI:
10.1145/2516760.2516776](https://dl.acm.org/doi/10.1145/2516760.2516776)
2. **Debugging by Brute Force:**
- **Description:** Involves systematically trying different inputs, configurations, or code modifications to identify the source of errors. It's an exhaustive approach to finding solutions.
3. **Debugging by Testing:**
- **Description:** This technique involves creating and executing tests to identify and isolate bugs. Testing strategies
include unit testing, integration testing, and system testing.
- **Reference:** Myers, G. J., Sandler, C., & Badgett, T. (2011). "The Art of Software Testing." John Wiley & Sons.
4. **Backtracking Strategy:**
- **Description:** Involves systematically revisiting and reassessing decisions made during program execution to
identify and correct errors. It's a process of stepping backward to find the root cause of issues.
- **Reference:**
- Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1974). "The Design and Analysis of Computer Algorithms." Addison-Wesley.
5. **Induction Strategy:**
- **Description:** Involves inferring potential causes of errors based on observed patterns or recurring issues. It relies on inductive reasoning to generalize from specific instances.
- **Reference:**
- Pan, L., Zhang, Z., & Lu, S. (2011). "FastTrack: Efficient and Precise Dynamic Taint Analysis for Multithreaded Programs." ACM Transactions on Computer Systems (TOCS), 29(4), 12. [DOI: 10.1145/2043676.2043681](https://dl.acm.org/doi/10.1145/2043676.2043681)
These debugging and testing techniques offer various approaches to identify, isolate, and resolve software defects in different
scenarios.
Q9: Answer the following questions regarding the functions and features of sort keys in NoSQL storage. (Satisfactory response: Yes / No)
9.1. Explain the functions and features of sort keys in NoSQL storage. List the ranges supported by the sort key. Write your answer in 300-350 words.
Ans 9.1: **Functions and Features of Sort Keys in NoSQL Storage:**
**Data Organization:** Sort keys are used to organize data within a partition. They determine the order in which items are
stored and provide a mechanism for range queries.
**Composite Keys:** Many NoSQL databases support composite keys, which consist of a partition key and a sort key. This combination allows for more granular organization and querying, as items are first grouped by the partition key and then sorted within the partition based on the sort key.
**Range Queries:** Sort keys enable the execution of range queries, allowing users to retrieve data within a specified range.
For example, retrieving all records with timestamps between a start and end date.
**Customizable Sorting:** NoSQL databases provide flexibility in choosing the sort key criteria. This can include numeric
values, strings, or even complex data types, providing adaptability to diverse data structures.
**Ranges Supported by Sort Keys:**
In Amazon DynamoDB, for instance, the range of sort key values depends on the type of key attribute (string, number, or binary) and its length. DynamoDB supports both ascending and descending order for range queries.
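A hypothetical Python sketch of how a sort key enables range queries within a partition. The timestamp keys and helper functions are invented for illustration; DynamoDB's `BETWEEN` key condition works on the same principle of ordered storage within a partition.

```python
import bisect

partition = []  # list of (sort_key, item), kept in sort-key order

def put(sort_key: str, item: dict) -> None:
    # Keep the partition ordered on insert, as a sort key index would.
    bisect.insort(partition, (sort_key, item))

def range_query(low: str, high: str) -> list:
    """Return items whose sort key falls within [low, high], in order."""
    return [item for key, item in partition if low <= key <= high]

for ts in ["2024-01-03", "2024-01-01", "2024-01-05", "2024-02-01"]:
    put(ts, {"event": f"login@{ts}"})

january = [i["event"] for i in range_query("2024-01-01", "2024-01-31")]
print(january)  # ['login@2024-01-01', 'login@2024-01-03', 'login@2024-01-05']
```

A real store exploits the ordering to seek directly to the first matching key rather than scanning; the linear filter here is only for clarity.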
Explain features of authentication. Write your answer in 150-180 words.
Authentication involves verifying the identity of users or systems attempting to access resources. Key features of
authentication include multifactor authentication (MFA), which enhances security by requiring multiple credentials such as
passwords, biometrics, or tokens. Biometric authentication utilizes unique physical or behavioral characteristics like
fingerprints or facial recognition for identity verification. Strong password policies, including complexity requirements and
regular updates, are crucial in thwarting unauthorized access. Single Sign-On (SSO) streamlines user access by allowing them
to log in once to access multiple systems. Time-based access controls restrict user permissions to specific time frames,
reducing the risk of unauthorized usage. Additionally, robust authentication protocols, like OAuth and OpenID, facilitate
secure third-party access without exposing user credentials. These features collectively ensure a layered and resilient
authentication framework, fortifying systems against unauthorized access and safeguarding sensitive information.
Explain features of authorisation. Write your answer in 100-130 words.
Authorization involves granting or denying access to resources based on verified identities. Key features include role-based
Q11: Answer the following questions regarding the different NoSQL data store formats. (Satisfactory response: Yes / No)
• Key-value
• Document-based
• Column-based
• Graph-based
NoSQL databases are designed to handle diverse and unstructured data, and they employ various data models
to address specific use cases. Here are explanations of different NoSQL data store formats:
Key-Value Stores:
Key-value stores are the simplest NoSQL databases, associating each piece of data with a unique key. The data is stored as a
collection of key-value pairs, allowing for efficient retrieval and storage. These databases are highly scalable and performant.
Examples include Redis and Amazon DynamoDB. Key-value stores are suitable for scenarios where data access is primarily
based on a single key.
Document-Based Stores:
Document-based databases store data in flexible, JSON-like documents. Each document contains key-value pairs, and
collections of documents can be grouped together. MongoDB is a popular document-based NoSQL database. This format is
beneficial for handling semi-structured or hierarchical data, as documents can have nested structures, arrays, and
Column-Based Stores:
Column-family or column-based stores organize data into columns rather than rows. Each column family contains rows with a
unique key and columns associated with data attributes. Apache Cassandra and HBase are examples of column-based
databases. This model is efficient for read-heavy workloads, as it allows for fast data retrieval of specific attributes and enables
efficient storage of large amounts of sparse data.
Graph-Based Stores:
Graph databases are designed for handling data with complex relationships. They model data as nodes, edges, and properties,
representing entities, connections, and attributes. Neo4j is a widely-used graph database. This format is particularly useful for
applications involving highly interconnected data, such as social networks, fraud detection, or recommendation engines.
Graph databases excel in traversing relationships between entities and querying graph patterns.
Each NoSQL data store format has its strengths and weaknesses, making them suitable for different use cases. Key-value
stores are optimal for high-performance scenarios with simple data access patterns. Document-based stores are versatile and
accommodate dynamic schemas. Column-based stores excel in analytical processing and scalability. Graph-based stores are
powerful for applications with intricate relationships. Organizations often choose NoSQL databases based on their specific
data requirements and performance objectives, embracing the flexibility and scalability offered by these diverse data models.
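The contrast between the key-value and document-based formats above can be sketched in hypothetical Python. The data and the `find` helper are invented for illustration; `find` mimics the query-by-field pattern of document stores such as MongoDB.

```python
# Key-value: the value is an opaque blob, retrievable only by its key.
kv = {}
kv["user:1"] = '{"name": "Asha"}'
print(kv["user:1"])

# Document-based: the store understands the document's fields and can query them.
documents = [
    {"_id": 1, "name": "Asha", "city": "Perth"},
    {"_id": 2, "name": "Ben", "city": "Sydney"},
]

def find(collection: list, **criteria) -> list:
    """Match documents on field values, in the style of a find() call."""
    return [d for d in collection
            if all(d.get(field) == value for field, value in criteria.items())]

print([d["name"] for d in find(documents, city="Perth")])  # ['Asha']
```

The key difference is visible in the query: the key-value store can only answer "give me the value for this key", while the document store can filter on any field inside the stored documents.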
Q12: Answer the following questions regarding the different NoSQL data types. (Satisfactory response: Yes / No)
• Numeric/integer
• String
• Boolean
• Date time
NoSQL databases support a variety of data types to accommodate the diverse and dynamic nature of unstructured or semi-structured data.
Date time data types are employed to represent dates and times in NoSQL databases. They are crucial for scenarios that
require temporal information, such as event timestamps, deadlines, or scheduling. Date time data types can store
information at various levels of precision, including year, month, day, hour, minute, second, and even fractional seconds.
This enables accurate chronological ordering of data and facilitates queries based on time ranges or specific points in time.
Proper handling of date time data is essential for applications involving time-sensitive operations, analytics, or historical
data retrieval.
In summary, these NoSQL data types (numeric, string, Boolean, and date time) offer versatility in handling numbers, textual information, binary states, and temporal data, respectively. Their effective utilization contributes to the flexibility and efficiency of
NoSQL databases, enabling them to address a wide range of data storage and retrieval requirements in diverse application
scenarios.
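One common convention (not mandated by any particular NoSQL product) is to store date times as ISO 8601 strings, which sort lexicographically in chronological order, so sort-key range conditions such as BETWEEN behave as expected. A minimal sketch with invented order data:

```python
# DynamoDB has no native date-time type; a common convention is to store
# ISO 8601 strings, which sort lexicographically in chronological order,
# so sort-key range conditions (BETWEEN, begins_with) behave as expected.
orders = [
    {"order_id": "o1", "created_at": "2024-03-01T09:15:00Z"},
    {"order_id": "o2", "created_at": "2024-03-02T17:40:00Z"},
    {"order_id": "o3", "created_at": "2024-03-05T08:00:00Z"},
]

def orders_between(items, start_iso, end_iso):
    """Plain-Python stand-in for a BETWEEN condition on an ISO 8601 sort key."""
    return [i for i in items if start_iso <= i["created_at"] <= end_iso]

first_days = orders_between(orders, "2024-03-01T00:00:00Z", "2024-03-04T23:59:59Z")
print([o["order_id"] for o in first_days])  # ['o1', 'o2']
```

Because the comparison is plain string ordering, the same condition works unchanged as a DynamoDB sort-key expression.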
Feedback:
Second attempt:
Student Declaration:
I declare that the answers I have provided are my own work. Where I have accessed information from other sources, I have provided references and/or links to my sources.
I have kept a copy of all relevant notes and reference material that I
used as part of my submission.
I have provided references for all sources where the information is not
my own. I understand the consequences of falsifying documentation and
plagiarism. I understand how the assessment is structured. I accept that
the work I submit may be subject to verification to establish that it is my
own.
Student Signature
Date
Trainer/Assessor
Name
Trainer/Assessor Declaration:
I hold:
Vocational competencies at least to the level being delivered
Current relevant industry skills
Current knowledge and skills in VET, and undertake
Ongoing professional development in VET
Trainer/Assessor
Signature
Date
Office Use Only: The outcome of this assessment has been entered into the Student
Management System
This is the second (2) assessment task you must successfully complete to be deemed
competent in this unit of competency.
This assessment task is a Skills Test.
This assessment task consists of thirty (30) practical demonstration activities.
You will receive your feedback within two (2) weeks, and you will be notified by your
trainer/assessor when your results are available.
You must attempt all activities of the project for your trainer/assessor to assess your
competence in this assessment task.
Applicable conditions:
This skill test is untimed and is conducted as an open book assessment (this means you
are able to refer to your textbook or other learner materials during the test).
You will be assessed independently on this assessment task.
No marks or grades are allocated for this assessment task. The outcome of the task will be
Satisfactory or Not Satisfactory.
As you complete this assessment task, you are predominantly demonstrating your skills,
techniques and knowledge to your trainer/assessor.
Your trainer/assessor may ask you relevant questions during this assessment task.
Where a student’s answers are deemed not satisfactory after the first attempt, a
resubmission attempt will be allowed.
The student may speak to their trainer/assessor if they have any difficulty in completing
this task and require reasonable adjustments.
For more information, please refer to the Training Organisation’s Student Handbook.
This assessment task may be completed in:
a classroom,
a learning management system (i.e. Moodle),
a workplace,
or an independent learning environment.
Your Trainer/Assessor will provide you with further information regarding the location for
completing this assessment task.
The purpose of this assessment task is to assess the student's knowledge and skills essential to
manage data persistence using NoSQL data stores in a range of contexts and industry settings.
Skill to create at least three different queries, including updating, deleting and creating
data types
Skill to create at least two indexes.
Skill to specify partition and sort keys
Skill to optimise the data.
Task instructions
This is an individual assessment.
The task will be completed in your training organisation’s IT lab.
The trainer/assessor will provide the required resources to the student/trainee.
According to organisational needs, the student will manage data persistence using NoSQL
data stores.
The word limit for this assessment task is given in the template.
The trainer/assessor will ensure simulated environment conditions for the student/trainee
to manage data persistence using NoSQL data stores.
The student must document completed developments.
The student must use the templates provided to document their responses.
The student must follow the word limits specified in the templates.
The trainer/assessor must assess the student using the performance checklist provided
The following forms the basis of the evidence that you need to collect from students for assessment in
this assessment task. The task and specific assessment requirements that are given to students are
also outlined.
o The outcome of the assessment tasks is either Satisfactory (S) or Not Satisfactory (NS).
o Feedback to the student
o The student declaration
o The Trainer/Assessor declaration
The trainer/assessor and the student must sign the Assessment Result Sheet to show that the
student was provided with the task outcome.
The Unit Mapping identifies what aspects of the Unit of Competency are being addressed in each
assessment task.
Once all assessment tasks allocated to this Unit of Competency have been undertaken, the
Student’s Assessment Plan (point 5 in the Student Pack) is completed to record the unit
outcome. The outcome will be either Competent (C) or Not Yet Competent (NYC).
When all assessment tasks are deemed Satisfactory (S), the unit outcome is Competent (C).
If at least one assessment task is deemed Not Satisfactory (NS), the unit outcome is Not Yet
Competent (NYC).
The following Information is attached to each assessment task:
o Assessment type
o Assessment task description
o Applicable conditions
o Resubmissions and reattempts
o Location
o Instructions for completion of the assessment task
o How trainers/assessors will assess the work
o Task-specific instructions for the student
To create three different queries, including updating, deleting and creating data types for the
NoSQL data store.
To create at least two indexes.
To specify partition and sort keys.
To optimise the data.
Activity 1: Confirm use and application for NoSQL according to business requirements and needs
Activity 2: Research and compare horizontal and vertical scaling and confirm relevance and
benefit of horizontal scaling according to business requirements
Activity 3: Research and compare NoSQL technologies and traditional relational data models
Activity 4: Research, review and select NoSQL vendor technologies according to business
requirements
Activity 5: Design and determine data storage requirements from NoSQL data store according to
selected vendor technology and business requirements
Activity 6: Review and select required types of NoSQL data store according to business
requirements
Activity 7: Create partition key and determine storage place of data items
Activity 8: Review and determine required partition key and ensure effective distribution of
storage across partition
Activity 9: Determine and select the required sort key according to business requirements
Activity 10: Calculate, determine and configure read and write throughputs according to
business requirements
Activity 11: Determine, configure and create indexes for optimising data retrieval queries
Activity 12: Determine and create additional indexes
Activity 13: Optimise data queries and retrievals for indexes according to business requirements
Activity 14: Determine and configure time-to-live (TTL) on data objects according to business
requirements
Activity 15: Research and select required API client for interacting with NoSQL data store
according to business requirements
Activity 16: Instantiate and connect API client to NoSQL data store instance
Activity 17: Insert single data object into NoSQL datastore using the selected client application
Activity 18: Insert multiple items in a single operation
Activity 19: Use query and select single object
Activity 20: Use query and retrieve multiple objects in batch
Activity 21: Perform query against the index
Activity 22: Perform query to select required attributes and project results
Activity 23: Delete single and multiple objects according to business requirements
Activity 24: Update single and multiple objects according to business requirements
Activity 25: Persist objects with different data types
Activity 26: Configure and confirm change event triggers and notifications according to business
needs
Activity 27: Test, fix and ensure responses and trigger notifications work according to business
requirements
Case Study:
Company profile:
'Quipmart Solutions' is a top e-commerce company in Australia, with its main headquarters in Sydney.
Quipmart Solutions provides essential goods and services to its customers and has several branches,
from which a large amount of data is collected into its database daily. The company's main objective
is to provide services that handle such large data sets, i.e., big data, as well as presenting big data
insights. The company places great emphasis on understanding the customer's needs, budget and
resources.
The data collected includes:
Customer's information.
Seller’s information on the website.
Employee’s information.
Information about each product added to the website.
Information about each product sold/bought.
Warehouse stock data at every branch.
Recently, the company has switched its data stores from RDBMS to NoSQL databases. The company has
approached your training organisation to create three different queries, including updating, deleting
and creating data types for the NoSQL data store. You are required to use a Key-value type NoSQL
database using DynamoDB from AWS services.
You are a senior software developer at your training organisation. The management of the organisation
approached you to complete the project. You are required to create three different queries, including
updating, deleting and creating data types for the NoSQL data store, and to create at least two indexes.
You also must specify partition and sort keys as well as optimise the data. This includes:
Task conditions
The purpose of this assessment task is to create three different queries, including updating,
deleting and creating data types for the NoSQL data store and to create two indexes. Also, to
specify partition and sort keys as well as optimise the data.
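As a hedged sketch of what the partition and sort key specification might look like for this scenario: the table name and attributes below ("QuipmartOrders", "CustomerId", "OrderDate") are invented, but the dict follows the shape that boto3's DynamoDB `create_table` call accepts:

```python
# Sketch of a DynamoDB CreateTable request for the Quipmart scenario.
# Table and attribute names ("QuipmartOrders", "CustomerId", "OrderDate")
# are assumptions for illustration; with boto3 you would pass this dict as
# boto3.client("dynamodb").create_table(**create_table_params).
create_table_params = {
    "TableName": "QuipmartOrders",
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "OrderDate", "KeyType": "RANGE"},   # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    "ProvisionedThroughput": {  # read/write throughputs (see Activity 10)
        "ReadCapacityUnits": 5,
        "WriteCapacityUnits": 5,
    },
}

partition_keys = [k["AttributeName"] for k in create_table_params["KeySchema"]
                  if k["KeyType"] == "HASH"]
print(partition_keys)  # ['CustomerId'] - the single HASH key picks the partition
```

The HASH key determines which partition stores an item, and the RANGE key orders items within that partition, which is what enables the range queries asked for later in the task.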
This assessment task will be completed in your training organisation’s IT lab. Your
trainer/assessor will supervise you in performing this assessment task.
Business needs
Businesses must provide continuous availability to cope with different sorts of data
transactions at any point in time and in difficult situations.
Response times must be fast enough to handle the most intense operations of the applications.
To gain Scalability
NoSQL database must handle data partitioning across multiple servers to meet the increasing data
storage requirements.
The schema-less structure of NoSQL databases should accommodate changes easily over time.
There must be a universal index provided for the structure, values and text found in the data
so that the organisation can respond to changes immediately using this information.
The NoSQL database must cater well to the applications' data requirements. Whether the data is a
simple object-oriented structure or a highly complex, inter-related data structure, these databases
must meet all kinds of data needs of the applications. Everything from simple binary values, strings
and lists to complex parent-child hierarchies, related information values, graph stores, etc., should
be handled well by a Not-Only SQL database.
Business requirements
a) All Services credentials such as Account Numbers, account passwords, User names/identifiers
(user IDs) and user passwords must be kept confidential and must not be disclosed to an
unauthorised party. No one from Service Provider will ever contact you and request your
credentials.
b) If the third party or third-party software or proprietary system or software used to access
Service Provider, data/systems are replaced or no longer in use, the passwords should be
changed immediately.
c) Create a unique user ID for each user to enable individual authentication and accountability
for access to Service Provider’s infrastructure. Each user of the system access software must
also have a unique logon password.
d) User IDs and passwords shall only be assigned to authorised individuals based on the least
privilege necessary to perform job responsibilities.
e) User IDs and passwords must not be shared, posted, or otherwise divulged in any manner.
f) Develop strong passwords that are:
• Not easily guessable (e.g. your name or company name, repeating numbers and letters, or
consecutive numbers and letters)
• Contain a minimum of eight (8) alphabetic and numeric characters for standard user
accounts
• For interactive sessions (i.e. non-system-to-system), ensure that passwords
are changed periodically (every 90 days is recommended)
g) Passwords (e.g. account passwords, user passwords) must be changed immediately upon any
suspicion of the password being disclosed to an unauthorised party (see section 4.3 for reporting
requirements)
h) Ensure that passwords are not transmitted, displayed or stored in clear text; protect all end-
user (e.g. internal and external) passwords using, for example, encryption or a cryptographic
hashing algorithm also known as “one-way” encryption. When using encryption, ensure that a
strong encryption algorithm is utilised (e.g. AES 256 or above).
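Requirement (h) can be sketched with Python's standard library alone. scrypt is one suitable "one-way" hashing algorithm; the parameters below are illustrative values, not a tuned production setting:

```python
import hashlib
import hmac
import os

# One-way password protection as required above: never store the clear text,
# only a salted hash. scrypt ships in Python's standard library; the cost
# parameters here are modest illustrative values.
def hash_password(password: str):
    salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("S3cure!Pass1")
print(verify_password("S3cure!Pass1", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))   # False
```

Storing only `salt` and `digest` satisfies the "not stored in clear text" rule: even if the data store is compromised, the original passwords cannot be read back directly.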
i) Implement password protected screensavers with a maximum fifteen (15) minute timeout to
protect unattended workstations. Systems should be manually locked before being left
unattended.
Task Instructions:
The student will create three different queries, including updating, deleting and creating data
types for the NoSQL data store.
You will create two indexes.
You will specify partition and sort keys.
Skill test:
To create three different queries, including updating, deleting and creating data types for the
NoSQL data store.
To create at least two indexes.
To specify partition and sort keys.
To optimise the data.
Activity 1: Confirm use and application for NoSQL according to business requirements and needs
Activity 2: Research and compare horizontal and vertical scaling and confirm relevance and
benefit of horizontal scaling according to business requirements
Activity 3: Research and compare NoSQL technologies and traditional relational data models
Activity 4: Research, review and select NoSQL vendor technologies according to business
requirements
Activity 5: Design and determine data storage requirements from NoSQL data store according to
selected vendor technology and business requirements
Activity 6: Review and select required types of NoSQL data store according to business
requirements
Activity 7: Create partition key and determine storage place of data items
Activity 8: Review and determine required partition key and ensure effective distribution of
storage across partition
Activity 9: Determine and select the required sort key according to business requirements
Activity 10: Calculate, determine and configure read and write throughputs according to
business requirements
Activity 11: Determine, configure and create indexes for optimising data retrieval queries
Activity 12: Determine and create additional indexes
Activity 13: Optimise data queries and retrievals for indexes according to business requirements
Activity 14: Determine and configure time-to-live (TTL) on data objects according to business
requirements
Activity 15: Research and select required API client for interacting with NoSQL data store
according to business requirements
Activity 16: Instantiate and connect API client to NoSQL data store instance
Activity 17: Insert single data object into NoSQL datastore using the selected client application
Activity 18: Insert multiple items in a single operation
Activity 19: Use query and select single object
Activity 20: Use query and retrieve multiple objects in batch
Activity 21: Perform query against the index
Activity 22: Perform query to select required attributes and project results
Activity 23: Delete single and multiple objects according to business requirements
Activity 24: Update single and multiple objects according to business requirements
Activity 25: Persist objects with different data types
Activity 26: Configure and confirm change event triggers and notifications according to business
needs
Activity 27: Test, fix and ensure responses and trigger notifications work according to business
requirements
ICTPRG533- Manage data persistence using Page 46 of 175
NoSQL data stores
V01.2022
Activity 28: Review and confirm data is encrypted and authorisation and authentications are
active according to user and client access requirements
Activity 29: Test and fix data persistence process according to business requirements
Activity 30: Document and finalise work according to business requirements.
Task Environment:
This assessment task will be completed in a simulated environment prepared by your training
organisation.
The simulated environment will provide you with all the required resources (such as the equipment and
participants, etc.) to complete the assessment task. The simulated environment is very much like a
learning environment where a student can practice, use and operate appropriate industrial equipment,
techniques, practices under realistic workplace conditions.
To confirm use and application for NoSQL according to business requirements and needs
To research and compare horizontal and vertical scaling and confirm relevance and benefit of
horizontal scaling according to business requirements
To research and compare NoSQL technologies and traditional relational data models
To research, review and select NoSQL vendor technologies according to business requirements
To design and determine data storage requirements from NoSQL data store according to
selected vendor technology and business requirements
To review and select required types of NoSQL data store according to business requirements
To create partition key and determine storage place of data items
To review and determine required partition key and ensure effective distribution of storage
across partition
To determine and select required sort key according to business requirements
To calculate, determine and configure read and write throughputs according to business
requirements
To determine, configure and create indexes for optimising data retrieval queries
To determine and create additional indexes
To optimise data queries and retrievals for indexes according to business requirements
To determine and configure time-to-live (TTL) on data objects according to business
requirements
To research and select required API client for interacting with NoSQL data store according to
business requirements
To instantiate and connect API client to NoSQL data store instance
To insert a single data object into NoSQL datastore using the selected client application
To insert multiple items in a single operation
To use query and select a single object
To use query and retrieve multiple objects in batch
To perform a query against the index
To perform a query to select required attributes and project results
To delete single and multiple objects according to business requirements
To update single and multiple objects according to business requirements
To persist objects with different data types
To configure and confirm change event triggers and notifications according to business needs
To test, fix and ensure responses and trigger notifications work according to business
requirements
To review and confirm data is encrypted and authorisation and authentications are active
according to user and client access requirements
To test and fix data persistence process according to business requirements
To document and finalise work according to business requirements.
This part of the activity requires you to confirm the use and application for NoSQL according to
business requirements and needs and document the outcomes using ‘Template 1’.
This activity requires you to confirm the use and application for NoSQL according to business
requirements and needs based on the information provided in the case study, in consultation with
your assessor.
NoSQL databases are a flexible option that may be used for a range of business requirements. They
perform exceptionally well in situations that call for adaptable data models that can handle dynamic
and unstructured data, like key-value pairs or JSON. Businesses that require scalability will find
NoSQL databases especially beneficial as they offer smooth horizontal expansion to accommodate
increasing data quantities and user traffic. NoSQL databases are useful for high-performance
applications with low latency needs, such as gaming platforms and real-time analytics. They are also
appropriate for applications ranging from content management to IoT platforms because of their
distributed and fault-tolerant architecture and support for various data models.
NoSQL databases are the go-to option for companies facing dynamic data challenges who are looking
for scalable, effective, and flexible solutions, because they perform best in settings where quick
development cycles, flexible schema evolution, and affordable storage are critical.
Session Store
Applications utilizing session stores benefit greatly from NoSQL databases like Redis or MongoDB.
They are ideal for storing session data in web applications because of their low latency and high read
and write loads. NoSQL databases' quick retrieval times and adaptable schemas help with effective
session management, guaranteeing smooth user experiences in dynamic, scalable settings.
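The expiry behaviour that makes such stores fit session data can be sketched in a few lines. This in-memory class is only a stand-in for Redis-style SETEX semantics, not a substitute for a real session store:

```python
import time

# Minimal in-memory stand-in for a Redis-style session store with TTL
# (SETEX semantics). Real deployments would use Redis itself; this sketch
# only shows the expiry behaviour that suits session data.
class SessionStore:
    def __init__(self):
        self._data = {}  # session_id -> (payload, expires_at)

    def set(self, session_id, payload, ttl_seconds):
        self._data[session_id] = (payload, time.monotonic() + ttl_seconds)

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        payload, expires_at = entry
        if time.monotonic() >= expires_at:  # lazy expiry on read
            del self._data[session_id]
            return None
        return payload

store = SessionStore()
store.set("sess-42", {"user": "alice"}, ttl_seconds=30)
print(store.get("sess-42"))  # {'user': 'alice'} while the TTL has not lapsed
```

Expiring sessions automatically, rather than deleting them in application code, is what keeps session stores fast and self-cleaning under high read and write loads.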
User Profile Store
User profile stores work nicely with NoSQL databases like Cassandra or MongoDB. They give you the
freedom to deal with a wide range of user data, including different traits and preferences. NoSQL
databases are perfect for dynamic, scalable systems like social networks or e-commerce platforms
where user information is diverse and always changing because of their effective read and write
capabilities, which provide speedy access to user profiles.
Content and Metadata Store
Applications involving the storage of metadata and content work well with NoSQL databases like
Couchbase and Elasticsearch. Their scalable and rapid access enables effective metadata retrieval,
and their flexible schema supports a wide range of content formats. Because of this, NoSQL
databases are perfect for apps, media platforms, and content management systems where managing
unstructured data and related metadata is essential to providing dynamic and responsive user
experiences.
Mobile Applications
NoSQL databases are essential to mobile applications, like Firebase or Realm. They are ideal for
mobile app development because of their versatility in handling different sorts of data, support for
offline functioning, and horizontal scalability. NoSQL databases guarantee smooth data
synchronization between devices, improving user experiences in messaging apps, real-time
collaborative apps, and other situations where scalable and flexible data storage is necessary for
mobile operation.
Third-Party Data Aggregation
MongoDB and Apache Cassandra are two examples of NoSQL databases that are essential to third-
party data aggregation. Their adaptable schema architecture supports a wide range of data formats,
simplifying the assimilation and processing of data from many sources. NoSQL databases make it
possible to store and retrieve aggregated data efficiently, which makes it easier to integrate different
datasets in applications like business intelligence tools, analytics platforms, and systems that need
extensive third-party data aggregation.
Internet of Things
Social Gaming
NoSQL databases, such as Redis or MongoDB, are essential to social gaming apps. Dynamic player data,
scoring, and in-game interactions are supported by their scalability and low-latency retrieval. Real-
time updates, customized gaming experiences, and effective management of user-generated material
are all made possible by NoSQL databases. This makes them perfect for social gaming, guaranteeing
fluidity and responsiveness in settings where quick access to data and adaptability are critical.
Ad Targeting
Ad targeting systems require NoSQL databases, such as Amazon DynamoDB or Apache Cassandra.
They provide dynamic ad personalization through their capacity to manage enormous volumes of
customer data and offer real-time access. In the ever-evolving world of online advertising, NoSQL
databases make it easier to store and retrieve a wide range of user preferences and behaviors,
guaranteeing efficient targeting and optimum ad delivery.
Satisfactory
Feedback to student:
Student signature
Observer signature
Activity 2: Research and compare horizontal and vertical scaling and confirm the relevance and benefit
of horizontal scaling according to business requirements.
This activity requires you to research and compare horizontal and vertical scaling and confirm the
relevance and benefit of horizontal scaling according to business requirements based on the information
provided in the case study.
Research and document horizontal and vertical scaling and document using Template 2.
Compare horizontal and vertical scaling based on the following factors and document using
Template 2:
o Databases
o Downtime
o Concurrency
o Message passing
Determine benefits of horizontal scaling and document using Template 2.
Research and compare horizontal and vertical scaling and confirm relevance and benefit of
horizontal scaling according to business requirements (200-250 words)
Examples:
Horizontal scaling: Businesses that must manage higher user traffic might consider horizontal scaling. E-commerce platforms have the capability to allocate workload among servers in order to guarantee smooth scalability even during moments of high demand.
Vertical scaling: For enterprises with sporadic resource requirements, vertical scaling is essential. For example, a database server with more RAM may handle increasing demands without becoming more sophisticated, guaranteeing peak performance.
Businesses can achieve more capacity and performance with horizontal scaling, which divides workloads
among several servers. It guarantees high availability, improves fault tolerance, and easily handles
increasing data and user needs. Improved performance during periods of peak usage, economical
scalability, and optimal resource utilization are all made possible by this method.
A single server can instantly have its resources upgraded through vertical scaling, increasing both its
capacity and performance. This method offers a simple solution without the hassles of managing
several servers, making it advantageous for applications experiencing abrupt spikes in demand. It
guarantees rapid scalability, flexibility in response to shifting workloads, and little interference with
daily operations.
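Horizontal scaling ultimately relies on a rule for placing each key on a server. A naive hash-based placement sketch follows (server names are invented; production systems prefer consistent hashing so that adding a node does not remap most keys):

```python
import hashlib

# Horizontal scaling distributes rows across servers; a simple (if naive)
# scheme hashes the partition key to pick a node. Server names are invented
# for illustration. Real systems prefer consistent hashing so that adding
# a node does not remap most keys.
SERVERS = ["node-a", "node-b", "node-c"]

def node_for(partition_key: str) -> str:
    digest = hashlib.sha256(partition_key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

placement = {key: node_for(key) for key in ["cust-1", "cust-2", "cust-3", "cust-4"]}
print(placement)  # each key lands deterministically on one node
```

Because the hash spreads keys roughly evenly, adding servers adds capacity, which is the horizontal-scaling benefit the case study's growing branch data calls for.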
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to research and compare NoSQL technologies and traditional
relational data models and document the outcomes using ‘Template 3’.
This activity requires you to research and compare NoSQL technologies and traditional relational data
models.
Conduct online research using web browsers and document comparison between NoSQL
technologies and traditional relational data models using Template 3.
Comparison between NoSQL technologies and traditional relational data models (150-200
words)
NoSQL: A flexible, schema-free architecture that can handle data that is semi-structured or unstructured.
Relational: A tabular, structured format with pre-established schemas that uses relationships to enforce data integrity.
NoSQL: Because there is no fixed schema, queries can be less complex, allowing for speedy and flexible access.
Relational: Complicated queries are common, particularly when involving complicated interactions between tables.
NoSQL: May be eventually consistent rather than strictly adhering to ACID; ideal in situations when instant consistency is not necessary.
Relational: Perfect for applications demanding data integrity for complicated transactions since it guarantees robust ACID compliance.
NoSQL: Perfect for distributed systems, content management systems, real-time analytics, and other applications with dynamic and changing data structures.
Relational: Ideal for systems with complicated transactions, enterprise resource planning (ERP), financial systems, and other applications with well-defined and reliable data structures.
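The schema-flexibility difference above can be shown in miniature with plain Python dicts (product data invented): document-style records need not share the same fields, whereas a relational row must fit one declared schema or hold NULLs.

```python
# Document-style records with varying fields; a relational table would need
# ALTER TABLE (or nullable columns) to accommodate each new attribute.
products = [
    {"sku": "P1", "name": "Kettle", "price": 49.95},
    {"sku": "P2", "name": "Laptop", "price": 1899.00, "warranty_years": 2},
    {"sku": "P3", "name": "Gift card", "denominations": [25, 50, 100]},
]

# Queries tolerate the missing fields with .get(), no schema change needed.
with_warranty = [p["sku"] for p in products if p.get("warranty_years")]
print(with_warranty)  # ['P2']
```

This is the sense in which document stores "accommodate dynamic schemas": each item carries its own shape, at the cost of the integrity guarantees a fixed relational schema enforces.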
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to research, review and select NoSQL vendor technologies
according to business requirements and document the outcomes using ‘Template 4’.
This activity requires you to research, review and select NoSQL vendor technologies according to
business requirements based on the information provided in the case study.
This activity requires you to use a web browser and do online research.
Research following type of NoSQL vendor technologies based on their strengths and
weaknesses and document using Template 4:
o Document store
o Wide-column store
o Key-value store
Review and select NoSQL vendors according to their strengths and weaknesses and document
using Template 4.
Document store
Document store NoSQL vendor technologies, such as MongoDB, store data as flexible,
JSON-like documents. These databases work effectively in situations where the data
structures are dynamic and ever-changing. MongoDB shines in this area, providing
scalability, developer-friendliness, and compatibility for a wide range of data formats.
It enables companies to effectively store and retrieve data in a manner consistent with
the format of their documents.
Strengths –
Document store NoSQL databases, like MongoDB, provide advantages such as an adaptable
schema design that can handle changing and dynamic data structures. Their scalability is
excellent, enabling effective horizontal scaling to accommodate increasing data volumes.
With JSON-like documents, document stores also support agile development by making it
simple to adjust to changing requirements. These databases work effectively in situations
requiring a variety of data formats and rapid development times.
Weakness –
One of the major drawbacks of document store NoSQL databases, like MongoDB, is that
denormalization may result in greater storage use, which can complicate updates.
It can be difficult to maintain data consistency between documents, and sophisticated
queries might execute more slowly than in conventional relational databases. Developers
switching from relational databases to document stores' flexible schema model may also
encounter a learning curve.
Wide-column store
Wide-column store NoSQL vendor technologies, such as Apache Cassandra, arrange data in
tables where rows are uniquely identified by keys and columns can vary from row to row.
Because of its high availability, scalability, and flexibility, this architecture can
handle massive volumes of time-series data. Fault tolerance and continuous availability
are ensured by Apache Cassandra's decentralized architecture and distributed structure,
particularly in distributed and highly scalable applications.
Strengths –
Wide-column store NoSQL databases, like Apache Cassandra, have advantages including fault
tolerance, high availability, and scalability. They guarantee continuous data availability
even in dispersed situations thanks to their decentralized architecture. Because of their
adaptable schema, which allows for dynamic column addition, they can handle substantial
amounts of time-series data. When real-time applications and effective distributed data
management are required, wide-column stores perform exceptionally well.
Weakness –
One of the possible drawbacks of wide-column NoSQL databases, such as Apache
Cassandra, is that their intricate setup and configuration may involve a learning curve.
Transactions spanning multiple tables can be complex, and real-time analytics may encounter
difficulties. Furthermore, wide-column stores might not be the best fit for use cases
needing intricate relationships or situations requiring immediate consistency across
numerous distributed nodes.
Key-value store
Key-value store NoSQL databases, such as Redis and Amazon DynamoDB, arrange data
as simple key-value pairs. Fast and effective data retrieval is made possible
by this structure's minimalism. Key-value stores can be used in situations where there is a
need for rapid access to unstructured or semi-structured data since they are extremely
scalable. Their superior performance and ease of use in use cases such as distributed
systems, caching, and session management allow for high-throughput access to stored
data.
Strengths –
Key-value NoSQL databases such as Redis are known for their scalability,
speed, and simplicity. They excel at delivering quick data retrieval via direct key
access, which makes them perfect for session management and caching. Key-value stores
provide high-performance access to stored data by effectively managing large datasets and
distributed systems. Their straightforward design facilitates agile development for
applications with a range of data requirements and makes implementation easier.
Weakness –
Key-value store NoSQL databases trade query power for speed: data can generally be
retrieved only by its key, the stored values are opaque to the database, and complex
relationships, filtering, or multi-attribute queries are difficult to express. They also
offer limited support for operations spanning multiple keys, which can make them a poor
fit for highly relational data.
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to design and determine data storage requirements from the
NoSQL data store according to selected vendor technology and business requirements and document
the outcomes using ‘Template 5’.
This activity requires you to design and determine data storage requirements from the NoSQL data
store according to selected vendor technology and business requirements based on the information
provided in the case study.
This activity requires you to use an AWS account and access the DynamoDB service selected in Activity
4.
Design and determine data storage requirements from NoSQL data store based on the following
scenarios:
o Data size
o Data shape
o Data velocity
Design and determine data storage requirements from NoSQL data store (DynamoDB)
(150-200 words)
Using the selected vendor technology, customize your NoSQL data store taking business
requirements into account. For flexibility, choose a document-oriented database such as
MongoDB. Create a productive data model that uses suitable indexing and is in line with
business entities. Modify storage configurations to satisfy scalability and performance
requirements, including cache size and compression. Install security measures in accordance
with corporate needs, set up backup plans, and keep an eye out for peak performance. Assess
data expansion on a regular basis, modify storage plans to meet changing company needs,
and keep your NoSQL solution flexible and strong.
Data size:
Using MongoDB, create a NoSQL document store based on the size and requirements of the business.
Make use of the adaptable indexing techniques, storage engine configurations, and schema offered by
MongoDB. Plan for scalability, prioritize read and write efficiency, and put security measures in place.
Maintain regular performance monitoring and tuning to make sure it's in line with company needs and
the expansion of data.
Data shape:
Use a document store, such as MongoDB, to store NoSQL data in accordance with business
requirements. Create a data model that can handle a variety of data shapes that fits the document-
oriented structure. A flexible and effective solution for a range of data structures and changing
business needs can be achieved by optimizing storage configurations, indexing, and security
procedures in accordance with business requirements.
Data velocity:
Use a document store for NoSQL data storage that can support different data velocities and business
needs, like MongoDB. Create a data model that is effective and able to manage data with high
velocity. Optimize indexing and storage configurations according to business needs to provide smooth
scalability for high-velocity, real-time data processing.
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to review and select required types of NoSQL data store according
to business requirements and document the outcomes using ‘Template 6’.
This activity requires you to review and select required types of NoSQL data store according to business
requirements based on the information provided in the case study.
Column-oriented database
Key-value database
Key-value databases are a subset of NoSQL data storage designed to meet certain business
requirements. They provide excellent read and write performance and scalability in basic
data models. Key-value databases, such as Redis or Amazon DynamoDB, are perfect for
situations where you need to retrieve data quickly based on unique keys. They are also
appropriate for applications that need session storage or caching. They might not be the
best option, though, for intricate relationships or queries. Key-value databases frequently
match easily with the needs of businesses that prioritize scalability, simplicity, and
speed, such as those that demand real-time applications or cache layers, by offering
scalable and effective data access.
Document database
A subset of NoSQL data stores known as document-oriented databases are very helpful to
companies whose data structures are dynamic and ever-changing. Well-known examples are
MongoDB and CouchDB, which store data in adaptable documents like JSON and make managing
a wide range of data easy. Because these databases offer both dynamic schema updates and
horizontal scaling, they are suitable for enterprises that need to be scalable. Document
databases offer effective searching and indexing for applications with intricate data
linkages and a variety of formats. Document-oriented databases are a good fit for
businesses with changing needs, including content management systems and e-commerce
platforms, since they provide performance and flexibility for a wide range of data kinds
and formats.
Graph database
ICTPRG533 - Manage data persistence using NoSQL data stores, Page 71 of 175, V01.2022
Graph databases, a kind of NoSQL data store, are great at handling intricate relationships
between entities, which makes them perfect for companies that emphasize data that is
related. Neo4j and Amazon Neptune are two notable instances. These databases handle
situations like social networks, fraud detection, or recommendation engines by utilizing
graph topologies and queries to quickly browse and evaluate complex connections. Businesses
can make more informed decisions by gaining insights into relationships through the use of
graph databases. However, other NoSQL kinds can be more appropriate for less complex,
non-relational data. Graph databases find a place in applications where precisely and
quickly achieving business objectives depends on comprehending and utilizing relationships
among data elements.
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to create a partition key and determine the storage place of data
items, and document the outcomes using ‘Template 7’.
This activity requires you to create a partition key and determine the storage place of data items.
This activity requires you to use an AWS account and access the DynamoDB service.
To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.
• Create a Table from the “sampledata.docx” file and give it the name ‘Table1’.
• Create partition key with the name ‘ID’.
• Add attributes and values for items in ‘Table1’ by taking reference from “sampledata.docx”.
• Determine the storage place of data items using the Scan or Query button.
A partition key is an important notion in the context of databases, particularly in NoSQL databases
like DynamoDB. Data is divided among several storage nodes using the partition key, allowing for
scalability and effective data retrieval. Here's a generic how-to for making a partition key:
Choose a field from your dataset to use as the partition key; this field is used to spread
data among several partitions. To lessen the possibility of "hot" partitions with excessively
high traffic, prefer a field with many distinct values. It's usually a good idea to use a
natural property found in your data, such as a product or customer ID.
The partition key in NoSQL databases, such as DynamoDB, determines where data items are
stored. This is how it operates:
Partitions:
Using a partition key, DynamoDB divides data among several servers.
Every partition has a unique throughput and storage capacity.
Partition Key:
Every element in the table has a distinct identity thanks to the partition key.
The partition where the data item is kept is identified by DynamoDB using the value of the
partition key.
Distribution:
A partition stores data objects that have the same value for the partition key.
In order to avoid hotspots or uneven data distribution, DynamoDB distributes things across
partitions in an equitable fashion.
Best Possible Performance:
Selecting an appropriate partition key is essential for uniform distribution and maximum
efficiency.
In order to maximize performance, it makes sure that read and write operations are spread
evenly across partitions.
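The idea that the partition key's value determines the storage place can be illustrated with a simplified hash-based sketch. This is not DynamoDB's internal algorithm, and the partition count here is purely illustrative (DynamoDB manages partitions itself):

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative only; DynamoDB manages partition count internally

def partition_for(partition_key_value: str) -> int:
    """Map a partition-key value to a partition number via a stable hash."""
    digest = hashlib.md5(partition_key_value.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# Items that share a partition-key value always land on the same partition,
# which is what makes key-based lookups fast and distribution predictable.
print(partition_for("ID-001"), partition_for("ID-002"))
```

A high-cardinality key (many distinct values) spreads items across all partitions and avoids "hot" partitions.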
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to review and determine the required partition key and ensure
effective distribution of storage across the partition, and document the outcomes using ‘Template 8’.
This activity requires you to review and determine the required partition key and ensure effective
distribution of storage across the partition.
This activity requires you to use an AWS account and access the DynamoDB service.
ID FName LName
Satisfactory
Feedback to student:
Student signature
Observer signature
Activity 9: Determine and select the required sort key according to business requirements.
This activity requires you to determine and select the required sort key according to business
requirements based on the information provided in the case study.
To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.
• Determine and select the required sort key using the below steps:
o Create another table with the name ‘Table2’ by taking data reference from
“sampledata.docx”.
o Create Partition key ‘ID’.
o Select and Create Sort Key ‘age’ from table 2.
o Add attributes and values for items in ‘Table2’ by taking reference from
“Sampledata.docx”.
In DynamoDB tables in particular, a sort key is necessary for effective data organization and
querying. It makes it possible to arrange items inside a partition according to particular
criteria. This facilitates the retrieval of material in a sorted sequence and allows range
searches. In NoSQL databases, sort keys improve query flexibility by enabling the retrieval
of items depending on a range of values. This leads to better performance and customized
data retrieval.
In Table2 we have created ‘Age’ as a sort key so that data within each partition is ordered and distributed correctly.
Satisfactory
Feedback to student:
Student signature
Observer signature
Activity 10: Calculate, determine and configure read and write through-puts according to business
requirements.
This part of the activity requires you to calculate, determine and configure read and write through-puts
according to business requirements and document the outcomes using ‘Template 10’.
Description of the activity
This activity requires you to calculate, determine and configure read and write through-puts according
to business requirements based on the information provided in the case study.
This activity requires you to use an AWS account with DynamoDB and CloudWatch services.
Read throughput
Read Capacity Units (RCUs) are used in Amazon DynamoDB to assess read performance metrics.
One RCU equals one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size.
RCUs give users the ability to allocate and modify resources to match their DynamoDB tables'
unique read performance requirements.
Write throughput
Write Capacity Units (WCUs) are used in Amazon DynamoDB to measure write performance
metrics. For items up to 1 KB in size, one WCU denotes the ability to execute one write operation
per second. With WCUs, users can provide and modify resources to satisfy their DynamoDB
tables' unique write performance requirements.
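The RCU and WCU definitions above translate directly into a capacity calculation. A small sketch of that arithmetic (the helper names are illustrative, not an AWS API):

```python
import math

def required_rcus(item_size_kb, reads_per_second, strongly_consistent=True):
    """RCUs needed: one RCU covers one strongly consistent read/s of up to 4 KB."""
    units_per_item = math.ceil(item_size_kb / 4)
    if not strongly_consistent:
        # Eventually consistent reads cost half as much.
        return math.ceil(units_per_item * reads_per_second / 2)
    return units_per_item * reads_per_second

def required_wcus(item_size_kb, writes_per_second):
    """WCUs needed: one WCU covers one write/s of an item up to 1 KB."""
    return math.ceil(item_size_kb) * writes_per_second

# e.g. 100 strongly consistent reads/s of 4 KB items needs 100 RCUs,
# and 10 writes/s of 1.5 KB items needs 20 WCUs.
```

These figures are what would be entered as provisioned read/write capacity on the table, then monitored with CloudWatch.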
Satisfactory
Feedback to student:
Student signature
Observer signature
Activity 11: Determine, configure and create indexes for optimising data retrieval queries.
This activity requires you to determine, configure and create two (2) indexes for optimising data
retrieval queries.
This activity requires you to use an AWS account with the DynamoDB service.
To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.
Determine, configure and create two (2) indexes for optimising data retrieval queries (80-
100 words)
Analyze query patterns to find commonly run queries before optimizing data retrieval queries. Based on
JOIN operations and filtering conditions, select the relevant columns. For important filtering attributes,
create single-column indexes using the syntax CREATE INDEX idx_column1 ON your_table(column1).
For queries with numerous conditions, take into account composite indexes as well, for
example CREATE INDEX idx_combo ON your_table(column1, column2). Keep an eye on index
utilization, and update statistics on a regular basis. Analyze and test how indexes affect
the speed of queries. Make use of database tools to get optimization recommendations. As the
database changes, make sure to periodically review and modify your indexing strategies to
keep retrieval efficient.
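The CREATE INDEX syntax applies to relational engines; in DynamoDB specifically, indexes are added as secondary indexes rather than with SQL. A sketch of the `update_table` parameters for adding a global secondary index (the index name "AgeIndex" and the keyed attribute are assumptions for illustration):

```python
# Parameters for adding a GSI, as passed to dynamodb_client.update_table(**gsi_update_params)
# with boto3's low-level client.
gsi_update_params = {
    "TableName": "Table3",
    "AttributeDefinitions": [
        {"AttributeName": "Age", "AttributeType": "N"},  # must define any new key attribute
    ],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "AgeIndex",                 # assumed name
                "KeySchema": [{"AttributeName": "Age", "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"}, # copy all attributes into the index
            }
        }
    ],
}
```

Once the index is active, queries can use `IndexName="AgeIndex"` to look items up by `Age` without scanning the base table.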
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to determine and create additional indexes and document the
outcomes using ‘Template 12’.
This activity requires you to use an AWS account with the DynamoDB service.
Identify locations with high resource usage or sluggish response times by analyzing query execution
plans and creating additional indexes for improved database performance. Pay attention to columns
that are commonly used in JOIN conditions or WHERE clauses. To create indexes for selected columns,
use SQL commands such as CREATE INDEX, taking into account single-column and composite indexes
according to query needs. Track database performance and modify indexing tactics as necessary.
Evaluate and reevaluate how new indexes affect the execution of queries on a regular basis. As the
database changes, experiment with various indexing setups and make use of database tools to optimize
indexes for maximum efficiency in data retrieval.
Satisfactory
Feedback to student:
Observer signature
Activity 13: Optimise data queries and retrievals for indexes according to business requirements.
This part of the activity requires you to optimise data queries and retrievals for indexes according to
business requirements and document the outcomes using ‘Template 13’.
This activity requires you to optimise data queries and retrievals for indexes according to business
requirements based on the information provided in the case study.
This activity requires you to use an AWS account with the DynamoDB service.
• Optimise data queries and Retrievals using the below techniques for indexes created in the
activity ‘11’ and activity ‘12’:
o Filtered data retrieval
o Sorted data retrieval
o Range-based data retrieval
Optimise data queries and Retrievals using the below techniques (100-150 words)
Use range queries, indexing to filter columns, and WHERE clauses to optimize data queries. For
effective data retrieval, make use of pagination, aggregation functions, and full-text search. Think
about stored processes, caching, and frequent observation. Use storage optimization and
compression strategies, as well as database-specific features, to improve filtered data retrieval in a
focused and effective way.
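The three retrieval techniques named above can be sketched as low-level DynamoDB query parameter sets, assuming Table2's 'ID' partition key and 'age' sort key (the attribute values here are illustrative; each dict would be passed as dynamodb_client.query(**params)):

```python
# Filtered data retrieval: FilterExpression prunes results after the key lookup.
filtered_query = {
    "TableName": "Table2",
    "KeyConditionExpression": "ID = :id",
    "FilterExpression": "FName = :fname",
    "ExpressionAttributeValues": {":id": {"S": "021"}, ":fname": {"S": "Diana"}},
}

# Sorted data retrieval: results come back in sort-key order; False = descending.
sorted_query = {
    "TableName": "Table2",
    "KeyConditionExpression": "ID = :id",
    "ExpressionAttributeValues": {":id": {"S": "021"}},
    "ScanIndexForward": False,
}

# Range-based data retrieval: BETWEEN on the sort key.
range_query = {
    "TableName": "Table2",
    "KeyConditionExpression": "ID = :id AND age BETWEEN :lo AND :hi",
    "ExpressionAttributeValues": {
        ":id": {"S": "021"},
        ":lo": {"N": "20"},
        ":hi": {"N": "30"},
    },
}
```

Note that a FilterExpression does not reduce read capacity consumed, so range conditions on the sort key are the cheaper option where they apply.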
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to determine and configure time-to-live (TTL) on data objects
according to business requirements and document the outcomes using ‘Template 14’.
This activity requires you to determine and configure time-to-live (TTL) on data objects according to
business requirements based on the information provided in the case study.
This activity requires you to use an AWS account with the DynamoDB service.
• Determine and Configure time-to-live (TTL) on data objects, i.e. data items using the below
steps:
o Login to AWS Management Console
o Navigate to Table2.
o Proceed to Table details
o Enter the ‘FName’ for the TTL attribute.
o Click Continue.
Determine and configure time-to-live (TTL) on data objects (80-100 words)
Information in a system must have a lifespan, which is configured by setting the Time-to-Live (TTL)
on data objects. The duration of data validity is determined by this parameter before it expires. To
control data freshness, TTL is frequently used in database systems and caching techniques.
Organizations can manage how long data is cached or retained and make sure it still represents
relevance in real time by setting a TTL value. TTL configuration is necessary to maximize system
efficiency, reduce storage expenses, and preserve data accuracy by automatically deleting or
refreshing out-of-date information within the designated period of time.
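In DynamoDB, TTL is enabled per table by naming the attribute that holds each item's expiry time; DynamoDB requires that attribute to be a Number containing an epoch-seconds timestamp. A sketch of the configuration (the 'ExpiresAt' attribute name is an assumption for illustration):

```python
import time

# Parameters for enabling TTL, as passed to
# dynamodb_client.update_time_to_live(**ttl_params) with boto3.
ttl_params = {
    "TableName": "Table2",
    "TimeToLiveSpecification": {
        "Enabled": True,
        "AttributeName": "ExpiresAt",  # assumed name; must hold epoch seconds as a Number
    },
}

# Each item then carries its own expiry timestamp:
item_ttl = int(time.time()) + 7 * 24 * 3600  # expire roughly one week from now
```

Items whose timestamp has passed are deleted by DynamoDB in the background at no write cost, typically within a short period after expiry.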
Satisfactory
Feedback to student:
Student signature
Observer signature
Activity 15: Research and select required API client for interacting with NoSQL data store according to
business requirements.
This activity requires you to research and select the required API client for interacting with the NoSQL
datastore selected in “Activity 6” according to business requirements based on the information provided
in the case study.
This activity requires you to use an AWS account with the DynamoDB service.
Note: Before you can use the AWS SDKs with DynamoDB, you must get an AWS access key ID and
secret access key.
Research and select required API client for interacting with NoSQL data store (80-100
words)
Use a well-known third-party client that supports the particular NoSQL database or the official API
client offered by the database vendor to communicate with a NoSQL data store. The official MongoDB
Atlas Data API client, for instance, is available from MongoDB. The AWS SDKs, which include boto3
for Python, are provided by DynamoDB. DataStax drivers exist for Cassandra. Based on your
programming language and the NoSQL database's compatibility, select the API client. Ascertain that
the client offers the functionality, community support, and documentation required for a smooth
integration and productive engagement with the selected NoSQL data store.
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to substantiate and connect the API client to the NoSQL data store
instance and document the outcomes using ‘Template 16’.
This activity requires you to substantiate and connect the API client to the NoSQL data store instance.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
• Substantiate and connect API client to NoSQL data store instance using the below steps:
o Install or update Python
o Use the AWS Common Runtime (CRT)
o Complete AWS configuration
Substantiate and connect API client to NoSQL data store instance (80-100 words)
Visit the official Python website, download the most recent version, and launch the installer to begin
installing Python. During installation on Windows, be sure to tick "Add Python to PATH". Use
Homebrew (brew install python) on macOS. Use the package manager on Linux (sudo apt install
python3). Use python --version or python3 -V to confirm the installation.
AWS Common Runtime (CRT) is a suite of libraries for building cross-platform apps using AWS
services. It offers similar APIs for various programming languages, making it easy for developers to
interact with AWS. This improves the development and deployment process overall by guaranteeing
effective and dependable communication between applications and AWS services
Establishing Identity and Access Management (IAM) users, configuring security settings, generating
and configuring Amazon S3 buckets, obtaining access keys, and creating an AWS account are all
necessary steps in configuring AWS. Use the AWS SDKs or CLI to gain programmatic access. For
maximum security and effectiveness, evaluate and manage setups on a regular basis.
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to insert a single data object into NoSQL datastore using a
selected client application and document the outcomes using ‘Template 17’.
This activity requires you to insert a single data object into a NoSQL datastore using the selected client
application.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
To perform this activity, you need to refer to data from the “sampledata.docx” file provided with this
assessor pack.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Create/Insert a New Item/data Object into ‘Table2’ by taking reference from
“Sampledata.docx”.
• Add the Code to python (.py) file.
• Upload the file using the AWS CLI.
To complete this part of the activity, you must:
• Take a screenshot of each step implemented to insert a single data object into a NoSQL
datastore and submit it to the trainer/assessor via e-mail.
• Document the steps implemented to insert a single data object into a NoSQL datastore using
Template 17.
Insert single data object into NoSQL datastore using a selected client application (80-100
words)
Connect to the datastore using the NoSQL client application of your choice. Make a new data object
with the fields and values that you want. To enter the data object into the datastore, use the
command-line interface (CLI) provided by the client application. Verify that the data complies with
the database's specified schema or structure. Check for any error warnings or use the proper queries
to retrieve the entered data to confirm that the insertion was successful. Finish the insertion
procedure by committing the modifications to keep the new data in the NoSQL datastore.
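For DynamoDB with boto3 specifically, the insertion reduces to building the item and calling `put_item`. The attribute values below are illustrative placeholders, not taken from sampledata.docx, and the boto3 call is shown as comments because it needs live credentials:

```python
# One data object for 'Table2', matching its 'ID' partition key and 'age' sort key.
item = {
    "ID": "026",      # partition key (illustrative value)
    "age": 27,        # sort key (illustrative value)
    "FName": "Sam",   # illustrative
    "LName": "Lee",   # illustrative
}

# With credentials configured (e.g. via `aws configure`):
# import boto3
# table = boto3.resource("dynamodb").Table("Table2")
# table.put_item(Item=item)
# table.get_item(Key={"ID": "026", "age": 27})  # verify the insert
```

The follow-up `get_item` with the full primary key is the simplest way to confirm the insertion succeeded.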
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to insert multiple items in a single operation and document the
outcomes using ‘Template 18’.
This activity requires you to insert multiple items in a single operation from “sampledata.docx” file.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Create a JSON file and write code to insert multiple items for ‘Table3’ by taking reference from
“sampledata.docx”.
• Link the JSON file into python code.
• Run the python code on AWS CLI.
import json

json_file_path = 'Table3.json'
with open(json_file_path) as f:
    data = json.load(f)
Table3 = data['Table3']
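One detail worth noting when inserting multiple items: DynamoDB's BatchWriteItem accepts at most 25 items per call, so larger files must be written in chunks (boto3's `table.batch_writer()` does this automatically). The chunking itself can be sketched as:

```python
def chunk_items(items, batch_size=25):
    """Split a list of items into DynamoDB-sized batches (max 25 per BatchWriteItem)."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

# 60 illustrative items split into batches of 25 + 25 + 10.
batches = chunk_items([{"ID": str(n)} for n in range(60)])
print(len(batches))  # 3

# Each batch would then be sent with boto3, e.g.:
# with table.batch_writer() as writer:
#     for obj in Table3:
#         writer.put_item(Item=obj)
```

Using `batch_writer` also retries any unprocessed items, which a hand-rolled BatchWriteItem loop would have to handle itself.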
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to use query and select a single object, and document the
outcomes using ‘Template 19’.
This activity requires you to use a query and select a single object.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write python code with DynamoDB query to select the single object from ‘Table3’.
• Run the python code on AWS CLI.
import boto3
from boto3.dynamodb.conditions import Key, Attr

aws_access_key_id = 'your-access-key-id'
aws_secret_access_key = 'your-secret-access-key'
region_name = 'us-west-2'
table_name = 'Table3'

dynamodb = boto3.resource('dynamodb',
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key=aws_secret_access_key,
                          region_name=region_name)
table = dynamodb.Table(table_name)

response = table.query(
    KeyConditionExpression=Key('FName').eq('Diana')
)
items = response['Items']
for item in items:
    print(item)
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to use query and retrieve multiple objects in batch and document
the outcomes using ‘Template 20’.
This activity requires you to use queries and retrieve multiple objects in a batch.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write python code with DynamoDB query to select multiple objects from ‘Table3’ in batch
• Run the python code on AWS CLI.
Python code with DynamoDB query to select multiple objects in batch example:
import boto3
from boto3.dynamodb.conditions import Key

aws_access_key_id = 'your-access-key-id'
aws_secret_access_key = 'your-secret-access-key'
region_name = 'us-west-2'
table_name = 'Table3'

dynamodb = boto3.resource('dynamodb',
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key=aws_secret_access_key,
                          region_name=region_name)

# batch_get_item lives on the service resource; each key must supply the
# table's full primary key (here assumed to be 'FName').
response = dynamodb.batch_get_item(
    RequestItems={
        table_name: {
            'Keys': [
                {'FName': 'Diana'},
                {'FName': 'johnson'},
            ],
        }
    }
)
selected_items = response['Responses'].get(table_name, [])
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to perform a query against the index and document the outcomes
using ‘Template 21’.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write python code with DynamoDB query against the index for ‘Table3’ in python (.py) file.
• Run the python (.py) file on AWS CLI.
{
    TableName: "Table3",
    IndexName: "AgeAndIdIndex",
    KeyConditionExpression: "Genre = :genre",
    ExpressionAttributeValues: {
        ":genre": "Rock"
    }
};

{
    TableName: "Table3",
    IndexName: "id-Index",
    KeyConditionExpression: "Genre = :genre and Id < :id",
    ExpressionAttributeValues: {
        ":genre": "Rock",
        ":id": "024"
    },
    ProjectionExpression: "Id, Age"
};
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to perform a query to select required attributes and project results
and document the outcomes using ‘Template 22’.
This activity requires you to perform a query to select the required attributes and project results.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write python code with the query to select required attributes and project results from ‘Table3’
in the python (.py) file.
• Run the python (.py) file on AWS CLI.
Perform query to select required attributes and project results (80-100 words)
Python code with the query to select required attributes and project results example:
{
"TableName": "Table3",
"IndexName": "id-Index",
"Limit": 3,
"ConsistentRead": true,
"ProjectionExpression": "Id, PostedBy, ReplyDateTime",
"KeyConditionExpression": "Id = :v1 AND PostedBy BETWEEN :v2a AND :v2b",
"ExpressionAttributeValues": {
":v1": {"S": "Amazon DynamoDB#DynamoDB Thread 1"},
":v2a": {"S": "User A"},
":v2b": {"S": "User C"}
},
"ReturnConsumedCapacity": "TOTAL"
}
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to delete single and multiple objects according to business
requirements and document the outcomes using ‘Template 23’.
This activity requires you to delete single and multiple objects according to business requirements
based on the information provided in the case study.
This activity requires you to use an AWS account with DynamoDB and AWS SDK for Python services.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Write python code with the query to delete the single object from ‘Table3’ in the python (.py)
file.
• Run the python (.py) file on AWS CLI.
• Write python code with the query to delete multiple objects from ‘Table3’ in the python (.py)
file.
• Run the python (.py) file on AWS CLI.
Delete single and multiple objects according to business requirements (80-100 words)
import sqlite3

conn = sqlite3.connect('example.db')
cursor = conn.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")

# Delete a single object by id
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('John Smith', 26))
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('Diana Johnson', 24))
conn.commit()

def remove_one_object_by_id(object_id):
    cursor.execute("DELETE FROM users WHERE id=?", (object_id,))
    conn.commit()

remove_one_object_by_id(21)

# Delete multiple objects matching a condition
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('Caleb Peter', 30))
cursor.execute("INSERT INTO users (name, age) VALUES (?, ?)", ('Narrin Brown', 30))
conn.commit()

def remove_many_objects_by_age(age):
    cursor.execute("DELETE FROM users WHERE age=?", (age,))
    conn.commit()

remove_many_objects_by_age(30)
cursor.execute("SELECT * FROM users")
remaining_users = cursor.fetchall()
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to update single and multiple objects according to business
requirements and document the outcomes using ‘Template 24’.
This activity requires you to update single and multiple objects according to business requirements
based on the information provided in the case study.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Update single item in ‘Table3’ using SET, REMOVE, ADD, or DELETE keyword.
• Update multiple items in ‘Table3’ using batch operation.
Update single and multiple objects according to business requirements (80-100 words)
import boto3

aws_access_key_id = 'your-access-key-id'
aws_secret_access_key = 'your-secret-access-key'
aws_region = 'us-west-2'
table_name = 'Table3'

dynamodb = boto3.resource('dynamodb',
                          region_name=aws_region,
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key=aws_secret_access_key)
table = dynamodb.Table(table_name)

items_to_update = [
    {'Id': '025', 'NewAttribute': 'Diana'},
    {'Id': '023', 'NewAttribute': 'caleb'},
]

def update_multiple_items(items):
    for item in items:
        table.update_item(
            Key={'Id': item['Id']},
            UpdateExpression='SET NewAttribute = :val',
            ExpressionAttributeValues={':val': item['NewAttribute']}
        )

update_multiple_items(items_to_update)

for item in items_to_update:
    response = table.get_item(Key={'Id': item['Id']})
    updated_item = response.get('Item', {})
    print(f"Updated item for Id {item['Id']}: {updated_item}")
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to persist objects with different data types and document the
outcomes using ‘Template 25’.
This activity requires you to persist objects with different data types.
This activity requires you to use an AWS account with DynamoDB and Amazon SDK .NET services.
• Launch AWS CLI and access DynamoDB with API Client access keys.
• Map objects, i.e. data items of ‘Table3’, with DynamoDB using the Amazon SDK for .NET
Object Persistence Model.
Mapping data, i.e. data items, with DynamoDB using the Amazon SDK for .NET Object
Persistence Model
Selecting an appropriate database system and carefully designing your data model are essential when
storing objects with various data types. In a relational database, create a table with columns that
represent the various data types, and use an Object-Relational Mapping (ORM) framework for smooth
integration. In a NoSQL database such as DynamoDB, each attribute of an item can have a distinct
data type, providing flexibility. Serialise objects into a standard format for storage, then
deserialise them upon retrieval. Take data validation into account and confirm that the data complies
with the specifications and capabilities of the database.
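The template names the AWS SDK for .NET; a minimal Python/boto3 analogue of the same persistence idea is sketched below, using a hypothetical `product` object. The one conversion the boto3 resource layer actually requires is that Python floats be stored as `Decimal`:

```python
from decimal import Decimal

def to_dynamodb_item(obj):
    """Convert a plain Python dict into a DynamoDB-compatible item.

    boto3's resource layer accepts strings, ints, bools, lists and maps
    directly, but floats must be converted to Decimal before persisting.
    """
    converted = {}
    for key, value in obj.items():
        if isinstance(value, float):
            converted[key] = Decimal(str(value))
        elif isinstance(value, list):
            converted[key] = [Decimal(str(v)) if isinstance(v, float) else v
                              for v in value]
        else:
            converted[key] = value
    return converted

# A hypothetical object mixing several data types.
product = {
    'Id': '101',             # string (partition key)
    'Name': 'Widget',        # string
    'Price': 19.99,          # number -> stored as Decimal
    'InStock': True,         # boolean
    'Tags': ['new', 'sale']  # list of strings
}
item = to_dynamodb_item(product)
# table.put_item(Item=item)  # persists the mixed-type object to 'Table3'
```

The final `put_item` call is commented out because it requires a live table and valid credentials.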
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to configure and confirm change event triggers and notifications
according to business needs and document the outcomes using ‘Template 26’.
This activity requires you to configure and confirm change event triggers and notifications according to
business needs based on the information provided in the case study.
This activity requires you to use an AWS account with DynamoDB and AWS Lambda services.
• Configure and confirm change event triggers and notifications on your Supervisor’s email
according to business needs using the below steps:
o Step 1: Enable Stream for ‘Table3’.
o Step 2: Create a Lambda Execution Role
o Step 3: Create an Amazon SNS Topic
Configure and confirm change event triggers and notifications according to business needs
(80-100 words)
To enable DynamoDB Streams for a particular table, go to the AWS DynamoDB console, choose the
table, open the "Overview" tab, and click "Manage DynamoDB Streams" to activate and configure the
stream.
To create a Lambda execution role, open the IAM console, choose "Roles", create a new role, attach
the "AWSLambdaBasicExecutionRole" policy, and add inline policies as necessary.
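Steps 1 and 3 can also be scripted. The sketch below assumes boto3 and valid AWS credentials; the table name, topic name, and supervisor email are placeholders:

```python
# Stream settings for Step 1: capture both old and new item images.
STREAM_SPEC = {'StreamEnabled': True, 'StreamViewType': 'NEW_AND_OLD_IMAGES'}

def enable_stream_and_topic(table_name, topic_name, email,
                            region='us-west-2'):
    """Enable a DynamoDB stream (Step 1) and create an SNS topic with an
    email subscription for change notifications (Step 3)."""
    import boto3  # imported here so the sketch can be read and tested offline
    dynamodb = boto3.client('dynamodb', region_name=region)
    sns = boto3.client('sns', region_name=region)

    # Step 1: turn on item-level change capture for the table.
    dynamodb.update_table(TableName=table_name,
                          StreamSpecification=STREAM_SPEC)

    # Step 3: create the topic and subscribe the supervisor's email.
    # The subscription must be confirmed from that inbox before
    # notifications are delivered.
    topic_arn = sns.create_topic(Name=topic_name)['TopicArn']
    sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint=email)
    return topic_arn

# enable_stream_and_topic('Table3', 'Table3Changes', 'supervisor@example.com')
```

The final call is left commented out because it modifies live AWS resources.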
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to test, fix and ensure responses and trigger notifications work
according to business requirements and document the outcomes using ‘Template 27’.
This activity requires you to test, fix and ensure responses and trigger notifications work according to
business requirements based on the information provided in the case study.
This activity requires you to use an AWS account with DynamoDB, CloudWatch and AWS Lambda
services.
• Test and fix responses and trigger notifications work according to business requirements using
the below steps:
o Step 1: Create and Test a Lambda Function
o Step 2: Create and Test a Trigger
• Ensure responses and trigger notifications work by confirming it from Supervisor’s email.
Test, fix and ensure responses and trigger notifications work according to business
requirements (80-100 words)
To create an AWS Lambda function, open the Lambda console, choose "Create function", set the
runtime and code parameters, configure triggers, and define any environment variables. Test it with
the "Test" button or AWS CLI commands, using sample or custom events, to make sure the function
works as intended.
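A minimal sketch of such a Lambda function follows, assuming a hypothetical SNS topic ARN and an execution role that permits `sns:Publish`. It reads the DynamoDB stream records delivered by the trigger and sends one notification per changed item:

```python
import json

# Hypothetical topic ARN; replace with the ARN of the topic created earlier.
TOPIC_ARN = 'arn:aws:sns:us-west-2:123456789012:Table3Changes'

def format_notification(record):
    """Build a human-readable message from one DynamoDB stream record."""
    event = record.get('eventName', 'UNKNOWN')  # INSERT, MODIFY or REMOVE
    keys = record.get('dynamodb', {}).get('Keys', {})
    return f"{event} on Table3, keys: {json.dumps(keys)}"

def lambda_handler(event, context):
    """Entry point invoked by the DynamoDB stream trigger.

    Publishes one SNS message per changed item so the supervisor's
    email subscription receives a notification for each change.
    """
    import boto3  # imported here so the handler can be unit-tested locally
    sns = boto3.client('sns')
    records = event.get('Records', [])
    for record in records:
        sns.publish(TopicArn=TOPIC_ARN,
                    Subject='Table3 change event',
                    Message=format_notification(record))
    return {'statusCode': 200, 'processed': len(records)}
```

Testing with the console "Test" button and a sample DynamoDB stream event should produce one email per record; a failed publish appears in the function's CloudWatch logs.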
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to review and confirm data is encrypted and authorisation and
authentications are active according to the user and client access requirements, and document the
outcomes using ‘Template 28’.
This activity requires you to review and confirm data is encrypted and authorisation and authentications
are active according to the user and client access requirements based on the information provided in
the case study.
• Review and confirm data in ‘Table3’ is encrypted according to user and client access
requirements.
• Review and confirm authorisation and authentications are active for ‘Table3’ according to the
user and client access requirements by following the below practices:
o Use IAM roles to authenticate access to DynamoDB
o Use IAM policies for DynamoDB based authorisation
o Use IAM policy conditions for fine-grained access control
o Use a VPC endpoint and policies to access DynamoDB
o Consider client-side encryption
Review and confirm data is encrypted according to user and client access requirements
Examine data encryption methods to make sure they adhere to user and client access specifications.
Verify that the right access controls, key management procedures, and encryption protocols are being
used. Maintain regular evaluation and verification of security procedures to ensure data security and
compliance with regulatory requirements as well as organizational rules.
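One way to script the encryption review is to inspect the `SSEDescription` block that DynamoDB's `DescribeTable` returns. This is a sketch assuming boto3 and valid credentials; note that tables with no explicit `SSEDescription` are still encrypted at rest with an AWS owned key:

```python
def summarise_sse(sse_description):
    """Turn the SSEDescription block from DescribeTable into a short note."""
    if not sse_description:
        # DescribeTable omits SSEDescription when the default
        # AWS owned key is in use; the data is still encrypted at rest.
        return 'Encrypted at rest with an AWS owned key (default)'
    return (f"SSE status: {sse_description.get('Status')}, "
            f"type: {sse_description.get('SSEType')}")

def encryption_status(table_name, region='us-west-2'):
    """Review step: report the server-side encryption state of a table."""
    import boto3  # imported here so the sketch can be tested offline
    client = boto3.client('dynamodb', region_name=region)
    desc = client.describe_table(TableName=table_name)['Table']
    return summarise_sse(desc.get('SSEDescription'))

# encryption_status('Table3')
```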
Use IAM policies to implement DynamoDB-based authorisation. Create policies that outline what
actions can be performed on DynamoDB resources, then attach these policies to IAM users, groups, or
roles to ensure precise control over access permissions based on predetermined conditions.
Use IAM policy conditions to give AWS users fine-grained access control. To impose certain access
requirements, you can define criteria based on resource attributes, time, or IP address. This gives
you fine-grained control over permissions in your AWS environment.
Use a VPC endpoint for DynamoDB inside your Amazon VPC to improve security. In order to manage
access and guarantee fine-grained permissions, implement IAM policies. With this configuration, your
VPC and DynamoDB can communicate securely and privately without being visible to the internet.
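The IAM practices above can be illustrated with a sketch of a fine-grained policy. The account ID, table ARN, and policy names below are placeholders; the `dynamodb:LeadingKeys` condition with the `${aws:username}` policy variable is one documented way to restrict each IAM user to items whose partition key matches their own username:

```python
import json

# Hypothetical policy document; account ID and region are placeholders.
FINE_GRAINED_POLICY = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['dynamodb:GetItem', 'dynamodb:UpdateItem',
                   'dynamodb:Query'],
        'Resource': 'arn:aws:dynamodb:us-west-2:123456789012:table/Table3',
        'Condition': {
            # Fine-grained access control: each caller may only touch
            # items whose partition key equals their IAM username.
            'ForAllValues:StringEquals': {
                'dynamodb:LeadingKeys': ['${aws:username}']
            }
        }
    }]
}

def attach_policy(role_name, policy_name='Table3FineGrainedAccess'):
    """Attach the policy above to an IAM role (authorisation step)."""
    import boto3  # imported here; requires valid AWS credentials
    iam = boto3.client('iam')
    iam.put_role_policy(RoleName=role_name,
                        PolicyName=policy_name,
                        PolicyDocument=json.dumps(FINE_GRAINED_POLICY))
```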
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to test and fix the data persistence process according to business
requirements and document the outcomes using ‘Template 29’.
This activity requires you to test and fix the data persistence process according to business
requirements based on the information provided in the case study.
• Test and fix the below data persistence processes by using DynamoDB with AWS:
o Data conversion
o Data migration
o Consistency model
o Security – encryption
o Network security – VPC endpoint for DynamoDB
o Performance – throughput and auto-scaling
o Required performance – microseconds versus milliseconds
Data conversion
Verify mapping, ensure data integrity, and test with a variety of data samples. Implement error
handling, optimise for speed, and check for missing data. Back up the data and document the
procedure. Solve problems iteratively, retesting after each modification. Constant monitoring and
updates keep the data conversion process dependable and effective.
Data migration
Verify the mapping, experiment with different data sets, and guarantee data integrity while
migrating. Optimise performance, create a reliable error-handling system, and back up your data.
Document the procedure and use iterative problem-solving. Ongoing observation ensures a
dependable and effective data migration process.
Consistency model
Verify the data consistency model, conduct a variety of scenario tests, and guarantee integrity. Put
error handling into practice, boost performance, and make data backups. Keep track of the procedure
and solve problems in iterations. Constant observation preserves the integrity of the consistency
model by guaranteeing a dependable and consistent data persistence method.
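The consistency checks described above can be sketched with boto3's `ConsistentRead` flag; here `table` is assumed to be a boto3 Table resource such as 'Table3':

```python
def read_item(table, key, strongly_consistent=False):
    """Read one item, optionally with a strongly consistent read.

    DynamoDB defaults to eventually consistent reads; passing
    ConsistentRead=True trades some latency and throughput for a read
    that reflects all prior successful writes.
    """
    response = table.get_item(Key=key, ConsistentRead=strongly_consistent)
    return response.get('Item')

def check_consistency(table, key, expected):
    """Test step: read the item back strongly consistently and compare
    it with the value the write was expected to produce."""
    return read_item(table, key, strongly_consistent=True) == expected

# Example test against 'Table3' after the earlier update:
# check_consistency(table, {'Id': '025'},
#                   {'Id': '025', 'NewAttribute': 'Diana'})
```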
Security – encryption
Check security precautions, validate the encryption implementation, and test with a variety of data
sets. Optimise encryption performance, implement strong error handling, and back up encrypted data.
Address security vulnerabilities iteratively and document the process. Strong encryption methods and
ongoing monitoring guarantee a safe and effective data persistence process.
Satisfactory
Feedback to student:
Student signature
Observer signature
This part of the activity requires you to document and finalise work according to business requirements
and document the outcomes using ‘Template 30’.
This activity requires you to document and finalise work according to business requirements based on
the information provided in the case study.
• Finalise the work using the below elements and document using Template 30:
o Include the scope of the project
o Record every resource that is being used during the project
o Mention the people involved in the project
o Merge all the contents
• Document work by following the below practices:
o Prioritise security
o Record and monitor the documents
o Establish a record management plan
o Review periodically
o Dispose of or destroy records at the end of their lifecycle
o Ensure that the data is accurate
o Digitise physical records
Welcome to the documentation for our NoSQL data store! This comprehensive manual prepares you
to perform essential tasks: formulating queries to create, update, and remove various kinds of
data. To improve data efficiency, it also covers the creation of two strategic indexes, the
significance of defining partition and sort keys, and optimisation strategies. This handbook
gives developers and administrators the scope to handle key tasks efficiently and streamline
data management.
Set up a development environment: install a code editor, the official client or SDK for the
selected NoSQL database (MongoDB, DynamoDB, etc.), and the database itself. Understand and
plan for partition and sort keys. Install index creation and management tools if necessary.
Acquire login credentials, and consult the official documentation for rules and best practices.
Use the SDK or NoSQL database client to communicate with the data store, for example by making
queries that add, remove, and alter different data types. Create indexes to support efficient
querying. Specify the partition and sort keys according to the data model's requirements, and
optimise your data storage for better performance.
Describe the architecture of NoSQL databases with an emphasis on performance and scalability.
Determine essential elements, like databases and indexes. Provide partition and sort keys in your
data models to ensure effective storage and retrieval. For the best possible handling of read and
write operations, take into account the overall architecture of the system.
Investigate common problems by looking through error logs and messages. The FAQs answer
frequently asked questions about query execution, index building, and data optimisation. For
complete troubleshooting methods and additional FAQs, refer to the full documentation or contact
the NoSQL database community and forums.
Satisfactory
Feedback to student:
Student signature
Observer signature
Feedback:
Second attempt:
Student Declaration: I declare that the answers I have provided are my own work. Where I
have accessed information from other sources, I have provided
references and/or links to my sources.
I have kept a copy of all relevant notes and reference material that I
used as part of my submission.
I have provided references for all sources where the information is not
my own. I understand the consequences of falsifying documentation and
plagiarism. I understand how the assessment is structured. I accept that
the work I submit may be subject to verification to establish that it is
my own.
I understand that if I disagree with the assessment outcome, I can
appeal the assessment process, and either re-submit additional evidence
Student Signature
Date
Trainer/Assessor
Name
Trainer/Assessor Declaration: I hold:
Vocational competencies at least to the level being delivered
Current relevant industry skills
Current knowledge and skills in VET, and undertake ongoing professional development in VET
Trainer/Assessor
Signature
Date
Office Use Only Outcome of Assessment has been entered into the Student Management
System