Data Warehouse
Extract, transform, load (ETL) and extract, load, transform (ELT) are the two main approaches used to build a data warehouse system.

[Figure: The basic architecture of a data warehouse]
The main source of the data is cleansed, transformed, catalogued, and made available for use by managers
and other business professionals for data mining, online analytical processing, market research and decision
support.[7] However, the means to retrieve and analyze data, to extract, transform, and load data, and to
manage the data dictionary are also considered essential components of a data warehousing system. Many
references to data warehousing use this broader context. Thus, an expanded definition of data warehousing
includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to
manage and retrieve metadata.
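The extract–transform–load flow described above can be sketched in a few lines of Python. This is a minimal illustration only; all names, data, and cleansing rules here are invented, not any particular tool's API:

```python
# Minimal ETL sketch: extract rows from an operational source, transform
# them (cleanse and conform codes), and load them into a warehouse store.
# All field names and values below are illustrative.

def extract(source_rows):
    """Extract: pull raw records from an operational system."""
    return list(source_rows)

def transform(rows):
    """Transform: conform country codes and drop records missing a key."""
    country_codes = {"usa": "US", "u.s.": "US", "uk": "GB"}
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):
            continue  # flag/skip bad data
        row = dict(row)
        code = row["country"].lower()
        row["country"] = country_codes.get(code, row["country"].upper())
        cleaned.append(row)
    return cleaned

def load(rows, warehouse):
    """Load: append the transformed rows to the warehouse table."""
    warehouse.extend(rows)
    return warehouse

warehouse = []
source = [
    {"customer_id": 1, "country": "usa", "amount": 120.0},
    {"customer_id": None, "country": "uk", "amount": 10.0},  # bad record
    {"customer_id": 2, "country": "UK", "amount": 75.5},
]
load(transform(extract(source)), warehouse)
```

In an ELT system the same transform step would instead run inside the warehouse after loading the raw data.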
Benefits
A data warehouse maintains a copy of information from the source transaction systems. This architectural
complexity provides the opportunity to:
Integrate data from multiple sources into a single database and data model. Consolidating data into a single database means a single query engine can be used to present data in an operational data store (ODS).
Mitigate the problem of database isolation level lock contention in transaction processing
systems caused by attempts to run large, long-running analysis queries in transaction
processing databases.
Maintain data history, even if the source transaction systems do not.
Integrate data from multiple source systems, enabling a central view across the enterprise.
This benefit is always valuable, but particularly so when the organization has grown by
merger.
Improve data quality, by providing consistent codes and descriptions, flagging or even fixing
bad data.
Present the organization's information consistently.
Provide a single common data model for all data of interest regardless of the data's source.
Restructure the data so that it makes sense to the business users.
Restructure the data so that it delivers excellent query performance, even for complex
analytic queries, without impacting the operational systems.
Add value to operational business applications, notably customer relationship management
(CRM) systems.
Make decision-support queries easier to write.
Organize and disambiguate repetitive data.
Generic
The environment for data warehouses and marts includes source systems that provide the data, technology and processes to integrate that data, and metadata to describe it.
Regarding source systems, R. Kelly Rainer states, "A common source for the data in data
warehouses is the company's operational databases, which can be relational databases".[8]
Regarding data integration, Rainer states, "It is necessary to extract data from source systems, transform
them, and load them into a data mart or warehouse".[8]
Metadata is data about data. "IT personnel need information about data sources; database, table, and
column names; refresh schedules; and data usage measures".[8]
Today, the most successful companies are those that can respond quickly and flexibly to market changes
and opportunities. A key to this response is the effective and efficient use of data and information by
analysts and managers.[8] A "data warehouse" is a repository of historical data that is organized by
subject to support decision-makers in the organization.[8] Once data is stored in a data mart or warehouse, it
can be accessed.
Types of data marts include dependent, independent, and hybrid data marts.
Online analytical processing (OLAP) is characterized by a relatively low volume of transactions. Queries
are often very complex and involve aggregations. For OLAP systems, response time is an effectiveness
measure. OLAP applications are widely used in data mining. OLAP databases store
aggregated, historical data in multi-dimensional schemas (usually star schemas). OLAP systems typically
have a data latency of a few hours, as opposed to data marts, where latency is expected to be closer to one
day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives.
The three basic operations in OLAP are Roll-up (Consolidation), Drill-down, and Slicing & Dicing.
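The three basic OLAP operations can be illustrated with a small Python sketch over a toy fact set keyed by (region, city, product); all values here are hypothetical:

```python
# Toy OLAP sketch: facts keyed by (region, city, product) -> sales units.
from collections import defaultdict

facts = {
    ("North", "Oslo",   "phone"):  40,
    ("North", "Bergen", "phone"):  25,
    ("North", "Oslo",   "tablet"): 10,
    ("South", "Rome",   "phone"):  55,
}

def roll_up(facts, level):
    """Roll-up (consolidation): aggregate to a coarser dimension level."""
    out = defaultdict(int)
    for key, units in facts.items():
        out[key[:level]] += units
    return dict(out)

def slice_(facts, dim_index, value):
    """Slicing: fix one dimension to a single value."""
    return {k: v for k, v in facts.items() if k[dim_index] == value}

# Roll-up from city level to region level; drill-down is the inverse,
# moving from region totals back to the detailed facts.
by_region = roll_up(facts, 1)
# Slice: only the "phone" product across all regions and cities.
phones = slice_(facts, 2, "phone")
```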
Online transaction processing (OLTP) is characterized by a large number of short on-line transactions
(INSERT, UPDATE, DELETE). OLTP systems emphasize very fast query processing and maintaining
data integrity in multi-access environments. For OLTP systems, effectiveness is measured by the number of
transactions per second. OLTP databases contain detailed and current data. Transactional databases typically use an entity–relationship model (usually normalized to 3NF).[10] Normalization is the norm for data modeling in these systems.
Predictive analytics is about finding and quantifying hidden patterns in the data using complex
mathematical models that can be used to predict future outcomes. Predictive analysis is different from
OLAP in that OLAP focuses on historical data analysis and is reactive in nature, while predictive analysis
focuses on the future. These systems are also used for customer relationship management (CRM).
History
The concept of data warehousing dates back to the late 1980s[11] when IBM researchers Barry Devlin and
Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was
intended to provide an architectural model for the flow of data from operational systems to decision support
environments. The concept attempted to address the various problems associated with this flow, mainly the
high costs associated with it. In the absence of a data warehousing architecture, an enormous amount of
redundancy was required to support multiple decision support environments. In larger corporations, it was
typical for multiple decision support environments to operate independently. Though each environment
served different users, they often required much of the same stored data. The process of gathering, cleaning
and integrating data from various sources, usually from long-term existing operational systems (usually
referred to as legacy systems), was typically in part replicated for each environment. Moreover, the
operational systems were frequently reexamined as new decision support requirements emerged. Often new
requirements necessitated gathering, cleaning and integrating new data from "data marts" that was tailored
for ready access by users.
Additionally, with the publication of The IRM Imperative (Wiley & Sons, 1991) by James M. Kerr, the idea
of managing and putting a dollar value on an organization's data resources and then reporting that value as
an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area
databases from data derived from transaction-driven systems to create a storage area where summary data
could be further leveraged to inform executive decision-making. This concept served to promote further
thinking of how a data warehouse could be developed and managed in a practical way within any
enterprise.
1960s – General Mills and Dartmouth College, in a joint research project, develop the terms
dimensions and facts.[12]
1970s – ACNielsen and IRI provide dimensional data marts for retail sales.[12]
1970s – Bill Inmon begins to define and discuss the term Data Warehouse.[13]
1975 – Sperry Univac introduces MAPPER (MAintain, Prepare, and Produce Executive
Reports), a database management and reporting system that includes the world's first 4GL. It
is the first platform designed for building Information Centers (a forerunner of contemporary
data warehouse technology).
1983 – Teradata introduces the DBC/1012 database computer specifically designed for
decision support.[14]
1984 – Metaphor Computer Systems, founded by David Liddle and Don Massaro, releases
a hardware/software package and GUI for business users to create a database management
and analytic system.
1988 – Barry Devlin and Paul Murphy publish the article "An architecture for a business and
information system" where they introduce the term "business data warehouse".[15]
1990 – Red Brick Systems, founded by Ralph Kimball, introduces Red Brick Warehouse, a
database management system specifically for data warehousing.
1991 - James M. Kerr authors The IRM Imperative, which suggests data resources could be
reported as an asset on a balance sheet, furthering commercial interest in the establishment
of data warehouses.
1991 – Prism Solutions, founded by Bill Inmon, introduces Prism Warehouse Manager,
software for developing a data warehouse.
1992 – Bill Inmon publishes the book Building the Data Warehouse.[16]
1995 – The Data Warehousing Institute, a for-profit organization that promotes data
warehousing, is founded.
1996 – Ralph Kimball publishes the book The Data Warehouse Toolkit.[17]
1998 – Focal modeling is implemented as an ensemble (hybrid) data warehouse modeling
approach, with Patrik Lager as one of the main drivers.[18][19]
2000 – Dan Linstedt releases in the public domain the Data vault modeling, conceived in
1990 as an alternative to Inmon and Kimball to provide long-term historical storage of data
coming in from multiple operational systems, with emphasis on tracing, auditing and
resilience to change of the source data model.
2008 – Bill Inmon, along with Derek Strauss and Genia Neushloss, publishes "DW 2.0: The
Architecture for the Next Generation of Data Warehousing", explaining his top-down
approach to data warehousing and coining the term data warehousing 2.0.
2008 – Anchor modeling was formalized in a paper presented at the International
Conference on Conceptual Modeling, where it won the best paper award.[20]
2012 – Bill Inmon develops and makes public technology known as "textual
disambiguation". Textual disambiguation applies context to raw text and reformats the raw
text and context into a standard database format. Once raw text is passed through textual
disambiguation, it can easily and efficiently be accessed and analyzed by standard
business intelligence technology. Textual disambiguation is accomplished through the
execution of textual ETL. Textual disambiguation is useful wherever raw text is found, such
as in documents, Hadoop, email, and so forth.
2013 – Data vault 2.0 was released,[21][22] with some minor changes to the modeling
method, as well as integration with best practices from other methodologies, architectures
and implementations, including agile and CMMI principles.
Information storage
Facts
A fact is a value or measurement that describes the managed entity or system.
Facts, as reported by the reporting entity, are said to be at the raw level; e.g., in a mobile telephone system, if a
BTS (base transceiver station) receives 1,000 requests for traffic channel allocation, allocates 820, and
rejects the remaining 180, it would report three facts or measurements to a management system:
tch_req_total = 1000
tch_req_success = 820
tch_req_fail = 180
Facts at the raw level are further aggregated to higher levels in various dimensions to extract more service-
or business-relevant information from them. These are called aggregates, summaries, or aggregated facts.
For instance, if there are three BTS in a city, then the facts above can be aggregated from the BTS to the
city level in the network dimension. For example:
tch_req_success_city = tch_req_success_bts1 +
tch_req_success_bts2 + tch_req_success_bts3
avg_tch_req_success_city = (tch_req_success_bts1 +
tch_req_success_bts2 + tch_req_success_bts3) / 3
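Using hypothetical counts for the three BTS, the city-level aggregation above can be computed directly:

```python
# Aggregating raw BTS facts to the city level in the network dimension.
# The counts for bts2 and bts3 are invented for illustration.
bts_facts = {
    "bts1": {"tch_req_total": 1000, "tch_req_success": 820},
    "bts2": {"tch_req_total": 900,  "tch_req_success": 750},
    "bts3": {"tch_req_total": 1100, "tch_req_success": 940},
}

# Sum of successful allocations across all BTS in the city.
tch_req_success_city = sum(f["tch_req_success"] for f in bts_facts.values())

# Average successful allocations per BTS in the city.
avg_tch_req_success_city = tch_req_success_city / len(bts_facts)
```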
There are several approaches to storing data in a data warehouse; the two most important are the
dimensional approach and the normalized approach.
The dimensional approach refers to Ralph Kimball's approach in which it is stated that the data warehouse
should be modeled using a Dimensional Model/star schema. The normalized approach, also called the 3NF
model (Third Normal Form), refers to Bill Inmon's approach in which it is stated that the data warehouse
should be modeled using an E-R model/normalized model.[23]
Dimensional approach
In a dimensional approach, transaction data is partitioned into "facts", which are generally numeric
transaction data, and "dimensions", which are the reference information that gives context to the facts. For
example, a sales transaction can be broken up into facts such as the number of products ordered and the
total price paid for the products, and into dimensions such as order date, customer name, product number,
order ship-to and bill-to locations, and salesperson responsible for receiving the order.
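As a minimal sketch (with invented values), the sales example above might be laid out as a fact table plus dimension tables, with an analytic query resolved by joining the two:

```python
# Star-schema sketch: a fact table of numeric measures with foreign keys
# into dimension tables that supply context. All data is illustrative.

customers = {  # customer dimension
    101: {"name": "Acme Corp", "city": "Boston"},
}
products = {   # product dimension
    7: {"name": "Widget", "category": "Hardware"},
}
sales_facts = [  # fact table: foreign keys plus numeric measures
    {"customer_id": 101, "product_id": 7, "order_date": "2024-03-01",
     "quantity": 3, "total_price": 59.97},
]

# A typical analytic query: total price per product category,
# answered by joining facts to the product dimension.
by_category = {}
for fact in sales_facts:
    category = products[fact["product_id"]]["category"]
    by_category[category] = by_category.get(category, 0.0) + fact["total_price"]
```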
A key advantage of a dimensional approach is that the data warehouse is easier for the user to understand
and to use. Also, the retrieval of data from the data warehouse tends to operate very quickly.[17]
Dimensional structures are easy to understand for business users, because the structure is divided into
measurements/facts and context/dimensions. Facts are related to the organization's business processes and
operational system whereas the dimensions surrounding them contain context about the measurement
(Kimball, Ralph 2008). Another advantage of the dimensional model is that it does not always require a
relational database. Thus, this type of modeling technique is well suited to end-user queries in the
data warehouse.
The model of facts and dimensions can also be understood as a data cube.[24] Where the dimensions are the
categorical coordinates in a multi-dimensional cube, the fact is a value corresponding to the coordinates.
The dimensional approach has the following disadvantages:
1. To maintain the integrity of facts and dimensions, loading the data warehouse with data from
different operational systems is complicated.
2. It is difficult to modify the data warehouse structure if the organization adopting the
dimensional approach changes the way in which it does business.
Normalized approach
In the normalized approach, the data in the data warehouse are stored following, to a degree, database
normalization rules. Tables are grouped together by subject areas that reflect general data categories (e.g.,
data on customers, products, finance, etc.). The normalized structure divides data into entities, which
creates several tables in a relational database. When applied in large enterprises the result is dozens of tables
that are linked together by a web of joins. Furthermore, each of the created entities is converted into
separate physical tables when the database is implemented (Kimball, Ralph 2008). The main advantage of
this approach is that it is straightforward to add information into the database. Some disadvantages of this
approach are that, because of the number of tables involved, it can be difficult for users to join data from
different sources into meaningful information and to access the information without a precise understanding
of the sources of data and of the data structure of the data warehouse.
Both normalized and dimensional models can be represented in entity–relationship diagrams as both
contain joined relational tables. The difference between the two models is the degree of normalization (also
known as Normal Forms). These approaches are not mutually exclusive, and there are other approaches.
Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008).
In Information-Driven Business,[25] Robert Hillard proposes an approach to comparing the two approaches
based on the information needs of the business problem. The technique shows that normalized models hold
far more information than their dimensional equivalents (even when the same fields are used in both
models) but this extra information comes at the cost of usability. The technique measures information
quantity in terms of information entropy and usability in terms of the Small Worlds data transformation
measure.[26]
Design methods
Bottom-up design
In the bottom-up approach, data marts are first created to provide reporting and analytical capabilities for
specific business processes. These data marts can then be integrated to create a comprehensive data
warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection
of conformed dimensions and conformed facts, which are dimensions that are shared (in a specific way)
between facts in two or more data marts.[27]
Top-down design
The top-down approach is designed using a normalized enterprise data model. "Atomic" data, that is, data
at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data
needed for specific business processes or specific departments are created from the data warehouse.[28]
Hybrid design
Data warehouses often resemble the hub and spokes architecture. Legacy systems feeding the warehouse
often include customer relationship management and enterprise resource planning, generating large
amounts of data. To consolidate these various data models, and facilitate the extract transform load process,
data warehouses often make use of an operational data store, the information from which is parsed into the
actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way.
Data marts for specific reports can then be built on top of the data warehouse.
A hybrid (also called ensemble) data warehouse database is kept in third normal form to eliminate data
redundancy. A normal relational database, however, is not efficient for business intelligence reports where
dimensional modelling is prevalent. Small data marts can draw data from the consolidated warehouse
and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides
a single source of information from which the data marts can read, providing a wide range of business
information. The hybrid architecture allows a data warehouse to be replaced with a master data
management repository where operational (not static) information could reside.
The data vault modeling components follow hub and spokes architecture. This modeling style is a hybrid
design, consisting of the best practices from both third normal form and star schema. The data vault model
is not a true third normal form, and breaks some of its rules, but it is a top-down architecture with a
bottom-up design. The data vault model is geared to be strictly a data warehouse. It is not geared to be
end-user accessible; when built, it still requires a data mart or star schema-based release area for
business purposes.
Subject-oriented
Unlike the operational systems, the data in the data warehouse revolves around the subjects of the
enterprise. Subject orientation is not database normalization, but it can be very useful for
decision-making. Organizing the data around these required subjects is what makes the warehouse
subject-oriented.
Integrated
The data found within the data warehouse is integrated. Since it comes from several operational systems, all
inconsistencies must be removed. This consistency covers naming conventions, measurement of variables,
encoding structures, physical attributes of data, and so forth.
Time-variant
While operational systems reflect current values as they support day-to-day operations, data warehouse data
represents a long time horizon (up to 10 years) which means it stores mostly historical data. It is mainly
meant for data mining and forecasting. (E.g. if a user is searching for a buying pattern of a specific
customer, the user needs to look at data on the current and past purchases.)[29]
Nonvolatile
The data in the data warehouse is read-only, which means it cannot be updated, created, or deleted (unless
there is a regulatory or statutory obligation to do so).[30]
Aggregation
In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The
user may start looking at the total sale units of a product in an entire region. Then the user looks at the states
in that region. Finally, they may examine the individual stores in a certain state. Therefore, typically, the
analysis starts at a higher level and drills down to lower levels of details.[29]
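The drill-down from region to state to store can be sketched as follows, using invented sales figures:

```python
# Drill-down sketch: aggregate total sale units at successively finer
# levels of the location hierarchy (region -> state -> store).
from collections import defaultdict

store_sales = [  # illustrative unit sales at the store level
    {"region": "West", "state": "CA", "store": "S1", "units": 120},
    {"region": "West", "state": "CA", "store": "S2", "units": 80},
    {"region": "West", "state": "OR", "store": "S3", "units": 60},
]

def totals(level_keys):
    """Aggregate units at the level defined by the given dimension keys."""
    out = defaultdict(int)
    for row in store_sales:
        out[tuple(row[k] for k in level_keys)] += row["units"]
    return dict(out)

region_totals = totals(["region"])                     # start at region level
state_totals = totals(["region", "state"])             # drill down to states
store_totals = totals(["region", "state", "store"])    # then to stores
```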
Virtualization
With data virtualization, the data used remains in its original locations and real-time access is established to
allow analytics across multiple sources creating a virtual data warehouse. This can aid in resolving some
technical difficulties such as compatibility problems when combining data from various platforms, lowering
the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding
the creation of a new database containing personal information can make it easier to comply with privacy
regulations. However, with data virtualization, the connection to all necessary data sources must be
operational as there is no local copy of the data, which is one of the main drawbacks of the approach.[31]
Data warehouses are optimized for analytic access patterns. Analytic access patterns generally involve
selecting specific fields and rarely if ever select *, which selects all fields/columns, as is more common
in operational databases. Because of these differences in access patterns, operational databases (loosely,
OLTP) benefit from the use of a row-oriented DBMS whereas analytics databases (loosely, OLAP) benefit
from the use of a column-oriented DBMS. Unlike operational systems which maintain a snapshot of the
business, data warehouses generally maintain an infinite history which is implemented through ETL
processes that periodically migrate data from the operational systems over to the data warehouse.
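The row- versus column-oriented distinction can be illustrated with a toy sketch. A real column store also benefits from compression and vectorized scans, which this deliberately omits:

```python
# The same table stored two ways. An analytic query touching one column
# scans far less data in the columnar layout. Data is illustrative.

rows = [  # row-oriented: each record stored together (OLTP-friendly)
    {"id": 1, "region": "North", "sales": 40},
    {"id": 2, "region": "South", "sales": 55},
    {"id": 3, "region": "North", "sales": 25},
]

columns = {  # column-oriented: each field stored contiguously (OLAP-friendly)
    "id":     [1, 2, 3],
    "region": ["North", "South", "North"],
    "sales":  [40, 55, 25],
}

# SELECT SUM(sales): the row store must walk every whole record,
# while the column store reads only the "sales" column.
total_row_store = sum(r["sales"] for r in rows)
total_col_store = sum(columns["sales"])
```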
See also
List of business intelligence software
Data mesh, a domain-oriented data architecture paradigm for managing big data
Marketing.xml, a standard used for importing marketing data into a data warehouse (2010)
Virtual Database Manager, represents non-relational data in a virtual data warehouse
References
1. Dedić, Nedim; Stanier, Clare (2016). Hammoudi, Slimane; Maciaszek, Leszek; Missikoff,
Michele M. Missikoff; Camp, Olivier; Cordeiro, José (eds.). An Evaluation of the Challenges
of Multilingualism in Data Warehouse Development (http://eprints.staffs.ac.uk/2770/).
International Conference on Enterprise Information Systems, 25–28 April 2016, Rome, Italy
(https://eprints.staffs.ac.uk/2770/1/ICEIS_2016_Volume_1.pdf) (PDF). Proceedings of the
18th International Conference on Enterprise Information Systems (ICEIS 2016). Vol. 1.
SciTePress. pp. 196–206. doi:10.5220/0005858401960206 (https://doi.org/10.5220%2F000
5858401960206). ISBN 978-989-758-187-8. Archived (https://web.archive.org/web/2018052
2180940/https://eprints.staffs.ac.uk/2770/1/ICEIS_2016_Volume_1.pdf) (PDF) from the
original on 2018-05-22.
2. "9 Reasons Data Warehouse Projects Fail" (https://blog.rjmetrics.com/2014/12/04/10-comm
on-mistakes-when-building-a-data-warehouse/). blog.rjmetrics.com. 4 December 2014.
Retrieved 2017-04-30.
3. "Exploring Data Warehouses and Data Quality" (https://web.archive.org/web/201807260718
09/https://spotlessdata.com/blog/exploring-data-warehouses-and-data-quality).
spotlessdata.com. Archived from the original (https://spotlessdata.com/blog/exploring-data-w
arehouses-and-data-quality) on 2018-07-26. Retrieved 2017-04-30.
4. "What is a Data Warehouse? | Key Concepts | Amazon Web Services" (https://aws.amazon.
com/data-warehouse/). Amazon Web Services, Inc. Retrieved 2023-02-13.
5. "What is Big Data?" (https://web.archive.org/web/20170217144032/https://spotlessdata.com/
what-big-data). spotlessdata.com. Archived from the original (https://spotlessdata.com/what-
big-data) on 2017-02-17. Retrieved 2017-04-30.
6. Patil, Preeti S.; Srikantha Rao; Suryakant B. Patil (2011). "Optimization of Data
Warehousing System: Simplification in Reporting and Analysis" (http://www.ijcaonline.org/pr
oceedings/icwet/number9/2131-db195). IJCA Proceedings on International Conference and
Workshop on Emerging Trends in Technology (ICWET). Foundation of Computer Science. 9
(6): 33–37.
7. Marakas & O'Brien 2009
8. Rainer, R. Kelly; Cegielski, Casey G. (2012-05-01). Introduction to Information Systems:
Enabling and Transforming Business, 4th Edition (https://archive.org/details/introductiontoin
00rain_274) (Kindle ed.). Wiley. pp. 127 (https://archive.org/details/introductiontoin00rain_27
4/page/n138), 128, 130, 131, 133. ISBN 978-1118129401.
9. "Data Mart Concepts" (http://docs.oracle.com/html/E10312_01/dm_concepts.htm). Oracle.
2007.
10. "OLTP vs. OLAP" (http://datawarehouse4u.info/OLTP-vs-OLAP.html).
Datawarehouse4u.Info. 2009. "We can divide IT systems into transactional (OLTP) and
analytical (OLAP). In general, we can assume that OLTP systems provide source data to
data warehouses, whereas OLAP systems help to analyze it."
11. "The Story So Far" (https://web.archive.org/web/20080708182105/http://www.computerworl
d.com/databasetopics/data/story/0%2C10801%2C70102%2C00.html). 2002-04-15.
Archived from the original (http://www.computerworld.com/databasetopics/data/story/0,1080
1,70102,00.html) on 2008-07-08. Retrieved 2008-09-21.
12. Kimball 2013, pg. 15
13. "The audit of the Data Warehouse Framework" (http://ceur-ws.org/Vol-19/paper14.pdf)
(PDF). Archived (https://web.archive.org/web/20120512064024/http://ceur-ws.org/Vol-19/pap
er14.pdf) (PDF) from the original on 2012-05-12.
14. Paul Gillin (February 20, 1984). "Will Teradata revive a market?" (https://books.google.com/b
ooks?id=5pw6ePUC8YYC&pg=PA48). Computer World. pp. 43, 48. Retrieved 2017-03-13.
15. Devlin, B. A.; Murphy, P. T. (1988). "An architecture for a business and information system".
IBM Systems Journal. 27: 60–80. doi:10.1147/sj.271.0060 (https://doi.org/10.1147%2Fsj.27
1.0060).
16. Inmon, Bill (1992). Building the Data Warehouse (https://archive.org/details/buildingdatawar
e00inmo_1). Wiley. ISBN 0-471-56960-7.
17. Kimball, Ralph (2011). The Data Warehouse Toolkit. Wiley. p. 237. ISBN 978-0-470-14977-
5.
18. Introduction to the focal framework (https://topofminds.se/wp/wp-content/uploads/Focal-Intro
duction-to-Focal-implementation.pdf)
19. Data Modeling Meetup Munich: An Introduction to Focal with Patrik Lager - YouTube (https://
www.youtube.com/watch?v=C2y92n0sPok)
20. Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia (2009).
"Anchor Modeling". Proceedings of the 28th International Conference on Conceptual
Modeling. ER '09. Gramado, Brazil: Springer-Verlag: 234–250. ISBN 978-3-642-04839-5.
21. A short intro to #datavault 2.0
22. Data Vault 2.0 Being Announced
23. Golfarelli, Matteo; Maio, Dario; Rizzi, Stefano (1998-06-01). "The dimensional fact model: a
conceptual model for data warehouses" (https://www.worldscientific.com/doi/abs/10.1142/S0
218843098000118). International Journal of Cooperative Information Systems. 07 (2n03):
215–247. doi:10.1142/S0218843098000118 (https://doi.org/10.1142%2FS02188430980001
18). ISSN 0218-8430 (https://www.worldcat.org/issn/0218-8430).
24. "Introduction to Data Cubes" (http://www2.cs.uregina.ca/~dbd/cs831/notes/dcubes/dcubes.ht
ml).
25. Hillard, Robert (2010). Information-Driven Business. Wiley. ISBN 978-0-470-62577-4.
26. "Information Theory & Business Intelligence Strategy - Small Worlds Data Transformation
Measure - MIKE2.0, the open source methodology for Information Development" (http://mike
2.openmethodology.org/wiki/Small_Worlds_Data_Transformation_Measure).
Mike2.openmethodology.org. Retrieved 2013-06-14.
27. "The Bottom-Up Misnomer - DecisionWorks Consulting" (http://decisionworks.com/2003/09/t
he-bottom-up-misnomer/). DecisionWorks Consulting. 17 September 2003. Retrieved
2016-03-06.
28. Gartner, Of Data Warehouses, Operational Data Stores, Data Marts and Data Outhouses,
Dec 2005
29. Paulraj., Ponniah (2010). Data warehousing fundamentals for IT professionals. Ponniah,
Paulraj. (2nd ed.). Hoboken, N.J.: John Wiley & Sons. ISBN 9780470462072.
OCLC 662453070 (https://www.worldcat.org/oclc/662453070).
30. H., Inmon, William (2005). Building the data warehouse (4th ed.). Indianapolis, IN: Wiley
Pub. ISBN 9780764599446. OCLC 61762085 (https://www.worldcat.org/oclc/61762085).
31. Paiho, Satu; Tuominen, Pekka; Rökman, Jyri; Ylikerälä, Markus; Pajula, Juha; Siikavirta,
Hanne (2022). "Opportunities of collected city data for smart cities" (https://doi.org/10.1049/s
mc2.12044). IET Smart Cities. 4 (4): 275–291. doi:10.1049/smc2.12044 (https://doi.org/10.10
49%2Fsmc2.12044). S2CID 253467923 (https://api.semanticscholar.org/CorpusID:2534679
23).
32. Gupta, Satinder Bal; Mittal, Aditya (2009). Introduction to Database Management System (htt
ps://books.google.com/books?id=fyQTae6c9l4C). Laxmi Publications.
ISBN 9788131807248.
33. "Data Warehouse" (http://www.tech-faq.com/data-warehouse.html). 6 April 2019.
Further reading
Davenport, Thomas H. and Harris, Jeanne G. Competing on Analytics: The New Science of
Winning (2007) Harvard Business School Press. ISBN 978-1-4221-0332-6
Ganczarski, Joe. Data Warehouse Implementations: Critical Implementation Factors Study
(2009) VDM Verlag ISBN 3-639-18589-7 ISBN 978-3-639-18589-8
Kimball, Ralph and Ross, Margy. The Data Warehouse Toolkit Third Edition (2013) Wiley,
ISBN 978-1-118-53080-1
Linstedt, Graziano, Hultgren. The Business of Data Vault Modeling Second Edition (2010)
Dan linstedt, ISBN 978-1-4357-1914-9
William Inmon. Building the Data Warehouse (2005) John Wiley and Sons, ISBN 978-81-
265-0645-3