Latest Tech Trends, Their Problems, and How to Solve Them
Few IT professionals are unaware of the rapid emergence of 5G, the Internet of Things (IoT), edge-fog-cloud (or core) computing, microservices, and artificial intelligence/machine learning (AI/ML). These new technologies hold enormous promise to transform IT and the customer experience through the problems they solve. But like all technologies, they introduce new processes and, with them, new problems. Most people are aware of the promise; few are aware of the new problems and how to solve them.
5G is a great example. It delivers 10 to 100 times more throughput than 4G LTE and up to 90% lower latency. Users can expect throughput between 1 and 10 Gbps with latency of approximately 1 ms. That enables large files, such as 4K or 8K videos, to be downloaded or uploaded in seconds, not minutes. 5G will deliver mobile broadband and could make traditional broadband obsolete, just as mobile telephony has eliminated the vast majority of landlines.
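To put those figures in perspective, here is a back-of-the-envelope sketch. The 8 GB file size and the ~50 Mbps typical 4G LTE rate are illustrative assumptions, not figures from the article; the 5 Gbps rate is the mid-range of the 1-10 Gbps 5G claim above.

```python
# Rough download times implied by the throughput figures above.
# The 8 GB file and the ~50 Mbps 4G LTE rate are illustrative assumptions.

def download_seconds(file_size_gb: float, link_gbps: float) -> float:
    """Seconds to move file_size_gb gigabytes over a link_gbps link (gigabits/s)."""
    return file_size_gb * 8 / link_gbps  # gigabytes -> gigabits, then divide by rate

lte = download_seconds(8, 0.05)  # assumed typical 4G LTE throughput
g5 = download_seconds(8, 5.0)    # mid-range of the 1-10 Gbps 5G claim

print(f"4G LTE: {lte:.0f} s (~{lte / 60:.0f} min); 5G: {g5:.1f} s")
```

Under these assumptions the same file drops from roughly twenty minutes on 4G LTE to around thirteen seconds on 5G, which is exactly the "seconds, not minutes" shift described above.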
5G mobile networking technology also makes industrial IoT more scalable, simpler, and much more economically feasible. Whereas 4G is limited to approximately 400 devices per km², 5G raises that to approximately 1,000,000 devices per km², roughly a 250,000% increase. This performance, latency, and scalability are why 5G is being called transformational. But 5G also introduces significant issues. A key one is the database application infrastructure.
Analysts frequently cite the non-trivial, multi-billion-dollar investment required to roll out 5G. That investment is focused primarily on the antennas and the fiber optic cables that feed them, because 5G is based on a completely different radio technology than 4G: it uses millimeter waves instead of microwaves. Millimeter-wave antennas can be no more than about 300 meters apart, whereas 4G microwave antennas can be as far as 16 km apart. That difference demands many more antennas, and optical cables to those antennas, to make 5G work effectively. It also means it will take considerable time before rural areas are covered by 5G, and even then it will be a degraded 5G.
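The antenna-spacing difference translates directly into antenna count. A rough sketch, treating each antenna as covering a circle whose radius is half the spacing; the 1,000 km² metro area is a hypothetical figure:

```python
import math

def antennas_needed(area_km2: float, spacing_km: float) -> int:
    """Approximate count if each antenna covers a circle of radius spacing/2."""
    coverage = math.pi * (spacing_km / 2) ** 2  # km^2 per antenna
    return math.ceil(area_km2 / coverage)

metro_km2 = 1000  # hypothetical metro area
n_4g = antennas_needed(metro_km2, 16.0)  # 4G microwave spacing from the article
n_5g = antennas_needed(metro_km2, 0.3)   # 5G millimeter-wave spacing

print(f"4G: {n_4g} antennas; 5G: {n_5g} antennas ({n_5g / n_4g:.0f}x more)")
```

Even this crude model shows thousands of 5G antennas where a handful of 4G towers sufficed, which is why the build-out cost concentrates in antennas and fiber.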
The 5G infrastructure investment that is not being addressed is the database application infrastructure. The database is a foundational technology for analytics; IT pros simply assume it will be there for their applications and microservices. Everything today is interconnected, and the database application infrastructure is generally architected for the volume and performance coming from the network. That volume and performance are about to go up by an order of magnitude. What happens when the database application infrastructure is not upgraded to match? Actual user performance improves marginally or not at all; it can even degrade as volumes overwhelm database applications not prepared for them. Both consumers and business users become frustrated. 5G devices cost approximately 30% more than 4G devices, mostly because they need both a 5G and a 4G modem (the technologies are incompatible), and 5G network service costs approximately 25% more than 4G. Understandably, anyone spending considerably more while seeing limited, zero, or negative improvement will be frustrated. The database application infrastructure becomes the bottleneck, and when consumers and business users become frustrated, they go somewhere else: another website, another supplier, another partner. Business will be lost.
Fortunately, there is still time: the 5G rollout is just starting, with momentum building in 2020 and complete implementations not expected until 2022 at the earliest. However, IT organizations need to start planning their application infrastructure upgrades to match the 5G rollout, or they may end up suffering the consequences.
IoT is another technology that promises to be transformative. It pushes intelligence to the edge of the network, enabling automation that was previously unthinkable: smarter homes, cars, grids, healthcare, fitness, water management, and more. IoT has the potential to radically increase efficiencies and reduce waste. Most implementations to date, however, have been in consumer homes and offices, relying on the Wi-Fi of the buildings in which they reside.
Industrial implementations have not been as successful…yet. Per Gartner, 65 to 85% of industrial IoT projects to date have been stuck in pilot mode, 28% of them for more than two years. There are three key reasons for this. The first is 4G's limit of roughly 400 devices per km², which will be fixed as 5G rolls out. The second is the same issue facing 5G: database application infrastructure not suited to the volume and performance industrial IoT requires. The third is latency from the IoT edge devices to the analytics, whether in the on-premises data center (core) or the cloud. Speed-of-light latency is a major limiting factor for real-time analytics and real-time actionable information. This has led to the very rapid rise of edge-fog-cloud (or core) computing.
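The speed-of-light floor is easy to quantify. A minimal sketch, assuming signals travel through optical fiber at roughly two-thirds of c (about 200 km per millisecond) and ignoring routing, queueing, and processing delays, all of which only add more; the tier distances are hypothetical:

```python
FIBER_KM_PER_MS = 200  # light in optical fiber covers ~200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time imposed by distance alone."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical distances from an IoT sensor to its analytics tier:
for tier, km in [("edge (on site)", 1), ("fog (metro)", 50), ("cloud region", 2000)]:
    print(f"{tier:>15}: {round_trip_ms(km):.2f} ms minimum round trip")
```

A distant cloud region carries a wire-level floor of tens of milliseconds before any processing happens, which already blows the roughly 1 ms budgets real-time control loops demand; only edge or fog placement gets under it.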
Moving analytic processing out to the edge or fog significantly reduces the distance latency between where data is collected and where it is analyzed. This is crucial for applications such as autonomous vehicles, which must make decisions in milliseconds, not seconds. A vehicle may have to decide whether a shadow in the road is actually a shadow, a reflection, a person, or a dangerous hazard to be avoided, and it cannot wait. By pushing the application closer to the data collection, it can make that decision in the timely manner required. Smart grids, smart cities, smart water management, and smart traffic management are all examples requiring fog (near the edge) or edge computing analytics. This solves the problem of distance latency; however, it does not resolve analytical latency. Edge and fog devices typically lack the resources to provide ultra-fast database analytics. This has led to the deployment of microservices.
Microservices have become very popular over the past 24 months. A microservice tightly couples a database application with its database, which has been streamlined to do only the few things the microservice requires. That database may be a stripped-down relational, time series, key-value, JSON, XML, or object database, among others. The database application and its database are inextricably linked, and the combined microservice is pushed down to the edge or fog compute device and its storage. A microservice has no access to any other microservice's data or database. If it needs another microservice's data element, getting it is difficult and labor-intensive: each microservice must be reworked to grant that access, or the data must be copied and moved via an extract, transform, and load (ETL) process, or the data must be duplicated on an ongoing basis. Each of these options is laborious, albeit manageable, for a handful of microservices. But what about hundreds or thousands of microservices, which is where things are headed? That sprawl becomes unmanageable and ultimately unsustainable, even with AI/ML.
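The data-sharing problem described above can be seen in a toy sketch (the services and records here are hypothetical): once a record is ETL'd into another service's private store, it is a physical copy that goes stale the moment the owner updates the original.

```python
# Each microservice owns a private store; sharing data means copying it.
inventory_db = {"sku-42": {"on_hand": 10}}  # owned by a hypothetical inventory service
pricing_db = {}                             # owned by a hypothetical pricing service

def etl_copy(source: dict, dest: dict, key: str) -> None:
    """Extract-transform-load: duplicate one record into another service's store."""
    dest[key] = dict(source[key])  # a physical copy, not a shared view

etl_copy(inventory_db, pricing_db, "sku-42")
inventory_db["sku-42"]["on_hand"] = 3  # the owner updates its record...

# ...and the consumer is now acting on stale data:
print(pricing_db["sku-42"]["on_hand"])  # prints 10, not 3
```

Multiply this by every consumer of every shared field and you get one copy and one pipeline per pair of services, which is the sprawl the article warns about.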
AI/ML is clearly a hot tech trend today, showing up everywhere in many applications, because standard CPU processing power is now sufficient to run machine learning algorithms. AI/ML typically shows up in one of two variations. The first has a specific, defined purpose: the vendor uses it to automate a manual task that requires some expertise. An example is enterprise storage, where AI/ML places data based on the performance, latency, and data protection policies and parameters set by the administrator, then matches those to the hardware configuration. If performance falls outside the desired parameters, the AI/ML corrects the situation without human intervention, learning from experience and automatically making the changes needed to achieve the required performance and latency. The second variation is a toolkit that enables IT pros to create their own algorithms.
The first is an application of AI/ML; it obviously cannot be used outside the tasks it was designed for. The second is a set of tools that requires considerable knowledge, skill, and expertise to use. It is not an application; it merely enables applications to be developed that take advantage of the AI/ML engine, and it comes with a very steep learning curve.
Oracle is the first vendor to address every one of these tech trend problems. The Oracle Exadata X8M and Oracle Database Appliance (ODA) X8 are uniquely suited to solve the 5G and IoT database application infrastructure problem, the edge-fog-core microservices problem, and the AI/ML usability problem.
It starts with co-engineering. The compute, memory, storage, interconnect, networking, operating system, hypervisor, middleware, and Oracle Database 19c are all co-engineered together. Few vendors have complete engineering teams for every layer of the software and hardware stacks, and those that do have shown zero inclination to take on the intensive co-engineering required. Oracle Exadata alone has 60 exclusive database features not found in any other database system, including others running the same Oracle Database. Take Automatic Indexing: it works multiple orders of magnitude faster than the most skilled database administrator (DBA) and delivers noticeably superior performance. Another example is data ingest: extensive parallelism is built into every Exadata, providing unmatched ingest rates. And keep in mind, the Oracle Autonomous Database uses the exact same Exadata Database Machine. The results of that co-engineering are unprecedented reductions in database application latency and response time, and increases in performance, enabling the database application infrastructure to match and be prepared for the volume and performance of 5G and IoT.
The ODA X8 is ideal for edge or fog computing, coming in at approximately 36% lower three-year total cost of ownership (TCO) than commodity white-box servers running databases. It is designed as a plug-and-play, turnkey Oracle Database appliance that runs the database application too. Nothing is simpler, and no white-box server can match its performance.
The Oracle Exadata X8M is even better for core or fog computing, where its performance, scalability, availability, and capabilities are simply unmatched by any other database system. It too is architected to be exceedingly simple to implement, operate, and manage.
The combination of the two working in conjunction across the edge-fog-core makes the database application latency problems go away. They even solve the microservices problems. Both the Oracle Exadata X8M and the ODA X8 provide pluggable databases (PDBs). Each PDB is its own unique database working off the same stored data in the container database (CDB), and each can be the same or a different type of Oracle Database: OLTP, data warehousing, time series, object, JSON, key-value, graph, spatial, XML, even document database mining. The PDBs work on virtual copies of the data. There is no data duplication, no ETL, no data movement, no data islands, and no runaway database licensing or hardware sprawl. Data does not go stale before it can be analyzed, and any data that needs to be accessed by one or more PDBs can easily be configured for that access. Edge-fog-core computing is solved. If the core needs to be in a public cloud, Oracle solves that problem as well: the Oracle Autonomous Database provides the same capabilities as Exadata, and more.
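The contrast with the microservice copy problem can be sketched conceptually, with plain Python standing in for the architecture (this is not Oracle syntax, and the class and record names are hypothetical): every pluggable view reads the single shared copy, so there is nothing to ETL and nothing to go stale.

```python
class ContainerDB:
    """Stands in for the CDB: the single physical copy of the data."""
    def __init__(self):
        self.rows = {}

class PluggableDB:
    """Stands in for a PDB: a named view over the shared container data."""
    def __init__(self, name: str, cdb: ContainerDB):
        self.name, self._cdb = name, cdb

    def read(self, key):
        return self._cdb.rows[key]  # no copy: reads the shared store directly

cdb = ContainerDB()
oltp = PluggableDB("oltp", cdb)            # hypothetical transactional workload
analytics = PluggableDB("analytics", cdb)  # hypothetical analytics workload

cdb.rows["order-1"] = {"status": "shipped"}
# Both views see the same live record; nothing was duplicated or moved:
print(oltp.read("order-1") is analytics.read("order-1"))  # prints True
```

The design choice being illustrated: sharing one physical store among many logical databases replaces N copy pipelines with zero.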
That leaves the AI/ML usability problem. Oracle solves that one too. Both Oracle engineered systems and the Oracle Autonomous Database have AI/ML engineered in from the outset, not just a toolkit on the side. Oracle AI/ML comes with pre-built, documented, production-hardened algorithms in the Oracle Autonomous Database cloud service. DBAs do not have to be data scientists to develop AI/ML applications; they can simply draw on Oracle's extensive library of AI/ML algorithms for classification, clustering, time series, anomaly detection, SQL analytics, regression, attribute importance, association rules, feature extraction, text mining support, R packages, statistical functions, predictive queries, and exportable ML models. It is as simple as selecting the algorithms and using them. That's it: no algorithms to create, test, document, QA, and patch.
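The usability argument boils down to "call a vetted algorithm instead of writing one." As an analogy only, here is that idea using Python's standard library; Oracle's in-database algorithms are invoked from SQL/PL-SQL and are not shown here, and the sample data is hypothetical.

```python
import statistics

def anomalies(samples: list[float], threshold: float = 2.0) -> list[float]:
    """Flag values more than `threshold` sample standard deviations from the mean."""
    mean = statistics.fmean(samples)     # pre-built, tested mean
    spread = statistics.stdev(samples)   # pre-built, tested standard deviation
    return [x for x in samples if abs(x - mean) > threshold * spread]

# Hypothetical sensor latencies (ms) with one obvious outlier:
latencies_ms = [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05, 42.0]
print(anomalies(latencies_ms))  # flags the 42.0 ms outlier
```

The selection-not-construction workflow is the point: the statistical machinery is already written, tested, and documented, and the user only chooses which routine to apply.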
Taking advantage of AI/ML is as simple as implementing Oracle Exadata X8M, ODA X8, or the Oracle Autonomous Database. Oracle solves the AI/ML usability problem.
The latest tech trends of 5G, industrial IoT, edge-fog-cloud (or core) computing, microservices, and AI/ML have the potential to be truly transformative for IT organizations of all stripes. But they bring their own set of problems. Fortunately, for organizations of all sizes, Oracle solves those problems.