The Problems of Monolithic Data Architecture
Copyright © 2020 Oracle and/or its affiliates.
People
• Business units may have few incentives to work across boundaries
• Hyper-specialization in tech teams narrows the focus to technology rather than outcomes or solutions
• Pressure on all stakeholders to produce value, but org structures are still a carry-over from older EDW projects
Process
• The classical Lambda/Kappa “Ingest -> Process -> Serve” design institutionalizes batch processing into the team processes
• Conceptually it is still “Extract -> Transform -> Load”, just under different names/syntax
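The point that “Ingest -> Process -> Serve” is ETL under new names can be made concrete with a minimal sketch. The function names and toy data below are hypothetical, not from any specific pipeline framework:

```python
# A minimal sketch (hypothetical functions) showing that the classical
# "Ingest -> Process -> Serve" pipeline is structurally the same as
# "Extract -> Transform -> Load" with the stages renamed.

def ingest(source):
    # "Ingest" == Extract: pull raw records in bulk from a source
    return [row.strip() for row in source]

def process(raw_rows):
    # "Process" == Transform: a batch transformation over the whole set
    return [row.upper() for row in raw_rows if row]

def serve(rows, store):
    # "Serve" == Load: write the batch result into a serving store
    store.extend(rows)
    return store

store = []
batch = ingest(["alpha \n", "", "beta\n"])
serve(process(batch), store)
print(store)  # ['ALPHA', 'BETA']
```

Whatever the stages are called, the shape is the same: a scheduled batch flows one way from source to store, which is exactly how batch processing gets institutionalized into team processes.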
Technology
• The monolithic data lake is big and slow by design, not by accident
• Architecture decomposition happens at several layers, but Big Data is inarguably “storage-centric”
• From HDFS (Hadoop) to Object Storage (Cloud), the classical approach is “the Lake”: a physical area where we pile up data
• But data is not static; data is in a constant state of dynamic equilibrium
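The “storage-centric” decomposition can be illustrated with a sketch of a typical lake layout. The bucket name, zone names, and paths below are hypothetical, chosen only to show that datasets are organized by physical storage location rather than by domain ownership:

```python
# Hypothetical object-storage layout for a monolithic lake: the primary
# structure is physical zones/paths, not business domains.
LAKE_ZONES = {
    "raw":     "s3://data-lake/raw/",      # landing area: pile up everything
    "staged":  "s3://data-lake/staged/",   # cleansed / conformed copies
    "curated": "s3://data-lake/curated/",  # serving-ready aggregates
}

def zone_path(zone, dataset):
    """Resolve a dataset to its physical location inside the lake."""
    return LAKE_ZONES[zone] + dataset

print(zone_path("raw", "orders/2020/06/"))
# s3://data-lake/raw/orders/2020/06/
```

Every domain's data funnels through the same shared zones, which is why the lake grows monolithically: the path hierarchy, not the domain, is the unit of decomposition.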
Images: https://martinfowler.com/articles/data-monolith-to-mesh.html