On January 11, Alibaba published a post on Chinese social media titled “2023 Top Ten Technology Trends Released: Exploring a More Certain Future.”
Disruptive technological breakthroughs may come only once in a century, but continuous, iterative innovation happens every day.
At the beginning of 2023, the power of technology continues to reshape our world.
Generative AI is booming, creating poems and paintings rich in beauty and emotion;
The digital twin city and the physical city go hand in hand and grow together;
Humans marvel at the unseen, thanks to computational optical imaging;
In the post-Moore era, Chiplet has attracted much attention…
Facing a sea of stars and exploring a more certain future, DAMO Academy’s top ten technology trends for 2023 are hereby released.
Alibaba | DAMO Academy
TOP TEN TECHNOLOGY TRENDS OF DAMO ACADEMY
Multimodal pre-trained large models
Pre-trained large models based on multiple modalities will realize a unified knowledge representation across images, text, and audio, and become the infrastructure of artificial intelligence.
Artificial intelligence is developing from single-modal intelligence (text, speech, or vision alone) toward general artificial intelligence that integrates multiple modalities.
Unified multimodal modeling aims to strengthen a model’s cross-modal semantic alignment, bridge the different modalities, and gradually standardize models.
The most notable recent technical progress has come from CLIP (contrastive image-text matching) and BEiT-3 (a general multimodal foundation model). Building a unified, cross-scenario, multi-task multimodal foundation model grounded in multi-domain knowledge has become a key development direction for artificial intelligence.
As future infrastructure, large models will realize a unified knowledge representation of images, text, and audio, and evolve toward cognitive intelligence capable of reasoning, question answering, summarization, and creation.
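To make the cross-modal alignment idea concrete, here is a minimal sketch of CLIP-style image-text matching: separate encoders map images and captions into a shared embedding space, and cosine similarity pairs each image with its best caption. The embedding values below are hypothetical stand-ins for real encoder outputs.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between every row of a and every row of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Toy 4-dim embeddings standing in for encoder outputs (hypothetical values).
image_emb = np.array([[1.0, 0.0, 0.0, 0.1],   # photo of a cat
                      [0.0, 1.0, 0.1, 0.0]])  # photo of a dog
text_emb  = np.array([[0.9, 0.1, 0.0, 0.0],   # "a cat"
                      [0.1, 0.9, 0.0, 0.0]])  # "a dog"

sim = cosine_sim(image_emb, text_emb)
# Each image is matched to the caption with the highest similarity.
best_caption = sim.argmax(axis=1)
print(best_caption)  # -> [0 1]
```

The real systems are trained contrastively so that matching pairs score high and mismatched pairs score low; the matching step at inference time is exactly this argmax over similarities.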
Chiplet-based modular design and packaging
Chiplet interconnection standards will gradually be unified, and the chip R&D process will be restructured.
Chiplet technology is “deconstruction-reconstruction-reuse” at the silicon level. It decomposes a traditional SoC into multiple chiplet modules, fabricates them separately, and then assembles them into a complete chip through interconnect packaging. Individual chiplets can be manufactured on different process nodes, which significantly reduces cost and enables a new form of IP reuse.
As Moore’s Law slows, chiplets have become an important way to keep improving SoC integration and computing power. With the establishment of the UCIe consortium in March 2022 in particular, chiplet interconnection standards will gradually be unified, further accelerating industrialization.
Chiplets based on advanced packaging technology may restructure the chip R&D process end to end, from manufacturing to packaging and testing, and from EDA to design, affecting the regional and industrial structure of the chip industry across the board.
Integrated storage and computing
Memory-computing integrated chips will see large-scale commercial use in vertical segments.
Storage-computing integration combines computing units with memory units so that computation is performed directly where data is stored, eliminating the overhead of data movement, greatly improving computing efficiency, and achieving energy-efficient computing and storage. It is especially well suited to AI scenarios with heavy memory access and high parallelism.
Driven by industry and investment, products based on SRAM, DRAM, and Flash storage media have entered the verification phase. They will first land in low-power, small-compute edge scenarios such as smart homes, wearable devices, general robotics, and intelligent surveillance.
In the future, as memory-computing integrated chips land in large-compute cloud inference scenarios, they may change the computing architecture itself, driving the evolution from the traditional computing-centric architecture to a data-centric one and positively influencing industries such as cloud computing, artificial intelligence, and the Internet of Things.
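One way to picture why compute-in-memory avoids data movement is the analog crossbar array: weights are stored as cell conductances, input voltages are applied to the columns, and a matrix-vector product emerges directly from Ohm’s and Kirchhoff’s laws at the storage site. A toy numpy sketch of that idea, with hypothetical conductance values:

```python
import numpy as np

# Hypothetical 3x4 crossbar: each cell's stored conductance acts as a weight.
G = np.array([[0.2, 0.5, 0.1, 0.0],
              [0.0, 0.3, 0.4, 0.2],
              [0.6, 0.0, 0.0, 0.1]])  # toy conductance values

v = np.array([1.0, 0.5, 0.0, 1.0])    # input voltages applied to the columns

# Ohm's law per cell (I = G * V) plus Kirchhoff's current law per row
# yields the matrix-vector product in place, with no weight movement:
i = G @ v  # output currents read off the rows
print(i)
```

In a von Neumann machine the same multiply requires streaming every weight from memory to the processor; in the crossbar the weights never move, which is the source of the efficiency gains the text describes.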
Cloud-native security
Security technology is deeply integrated with the cloud to create a new platform-based, intelligent security system.
Cloud-native security shifts the security concept from perimeter defense to defense in depth, and from bolt-on modules to built-in security. It secures cloud infrastructure natively and improves security services with cloud-native technology. Security technology and cloud computing have moved from loose coupling to tight integration, and along the technical route from “containerized deployment” and “microservice transformation” to “serverless,” security services are becoming native, fine-grained, platform-based, and intelligent.
Cloud computing architecture with integrated software and hardware
Cloud computing is evolving deeply toward a new architecture centered on the CIPU. Through software definition and hardware acceleration, it delivers comprehensive acceleration of cloud applications while preserving the flexibility and agility of cloud application development.
Cloud computing has undergone a deep evolution from a CPU-centric computing architecture to a new architecture centered on the Cloud Infrastructure Processing Unit (CIPU). Through software definition and hardware acceleration, the new architecture delivers comprehensive acceleration of cloud applications while preserving the flexibility and agility of cloud application development.
Under the new architecture, software-hardware integration first means hardware integration: physical computing, storage, and network resources are rapidly cloudified and hardware-accelerated through the CIPU. It also means software integration: computing resources accelerated by CIPU cloudification connect to the distributed platform through the controller on the CIPU, enabling flexible management, scheduling, and orchestration of cloud resources.
On this basis, the CIPU will define the service standards of next-generation cloud computing and bring new development opportunities to core software R&D and the dedicated-chip industry.
Predictable fabric integrating the server side and the network
Cloud-defined predictable networking technology is about to expand from local deployments inside the data center to the entire network.
A predictable fabric is a high-performance network interconnection system defined by cloud computing and coordinated between the server side and the network. Computing systems and network systems are merging: high-performance network interconnection allows computing clusters to scale out into large pools of computing power, accelerating the broad availability of computing power and moving it toward large-scale industrial application.
The predictable network is expected to support not only emerging large-compute and high-performance computing scenarios but also general-purpose computing, an industry trend that unifies traditional and future networks. Through cloud-defined full-stack innovation in protocols, software, chips, hardware, architecture, and platforms, high-compute networks are expected to displace the current technical system built on the traditional Internet TCP protocol, become a basic feature of next-generation data center networks, and expand from local deployments inside the data center to the entire network.
Dual-engine intelligent decision-making
Dual-engine intelligent decision making, which fuses operations research optimization with machine learning, will drive the optimization of global, dynamic resource allocation.
Enterprises need to make fast, accurate business decisions in complex, dynamically changing environments. Classical decision optimization is based on operations research: it builds a mathematical model that precisely describes the real problem, then applies optimization algorithms to find the solution that optimizes the objective function under multiple constraints.
As the complexity and pace of change of the external environment keep increasing, two limitations of classical decision optimization are becoming increasingly prominent: it handles uncertainty poorly, and it cannot solve large-scale problems quickly enough. Academia and industry have begun introducing machine learning to build new intelligent decision systems with dual engines, a mathematical model and a data model, that offset each other’s limitations and improve both the speed and quality of decisions.
In the future, dual-engine intelligent decision making will expand to more application scenarios and drive global, real-time, dynamic resource-allocation optimization in fields such as large-scale real-time power dispatch, port throughput optimization, airport stand allocation, and manufacturing process optimization.
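The classical operations-research engine described above can be illustrated with a tiny linear program. The production-planning numbers below are hypothetical, and `scipy.optimize.linprog` stands in for an industrial-grade solver:

```python
from scipy.optimize import linprog

# Toy production-planning problem (hypothetical numbers):
#   maximize profit 3x + 5y
#   subject to  x +  y <= 4   (machine hours)
#               x + 3y <= 6   (raw material)
#               x, y >= 0
# linprog minimizes, so we negate the objective.
res = linprog(c=[-3, -5],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])

print(res.x)     # optimal production plan -> [3. 1.]
print(-res.fun)  # optimal profit -> 14.0
```

Real dispatch and scheduling problems have the same shape, only with millions of variables and constraints that change in real time, which is exactly where the learned "data model" engine is brought in to warm-start or approximate the solve.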
Computational Optical Imaging
Computational optical imaging breaks through the limits of traditional optical imaging and will bring more creative and imaginative applications.
Computational optical imaging is an emerging interdisciplinary field. Oriented toward specific application tasks, it acquires or encodes light-field information (such as angle, polarization, and phase) in new ways and reconstructs it computationally, breaking the limits of traditional optical imaging.
Computational optical imaging is currently developing rapidly, with many exciting research results, and large-scale applications have begun in fields such as mobile phone cameras, medical imaging, and autonomous driving.
In the future, computational optical imaging is expected to further subvert traditional imaging systems and bring more creative and imaginative applications, such as lensless imaging and non-line-of-sight imaging.
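A simple instance of the idea is deconvolution: the optics blur the scene with a known point-spread function (PSF), and computation undoes the blur afterward, recovering detail the raw measurement does not show. A minimal 1-D sketch using Wiener deconvolution, with a hypothetical PSF and signal-to-noise ratio:

```python
import numpy as np

# A 1-D "scene": two point sources of different brightness.
scene = np.zeros(64)
scene[20] = 1.0
scene[40] = 0.5

# Known blur kernel (the optical system's PSF), zero-padded to scene length.
psf_pad = np.zeros(64)
psf_pad[:3] = [0.6, 0.3, 0.1]

# Forward model: the sensor records the scene convolved with the PSF.
H = np.fft.fft(psf_pad)
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Computational reconstruction: Wiener deconvolution in the Fourier domain.
snr = 1e3  # assumed signal-to-noise ratio (hypothetical)
wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

# Reconstruction error is small here because this PSF loses no frequencies.
print(np.max(np.abs(recovered - scene)))
```

The 1/snr term regularizes the inverse filter so that measurement noise is not amplified at frequencies where the PSF response is weak, which is the basic trade-off behind most computational-imaging reconstructions.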
Large-scale city digital twins
Building on this trend toward scale, city digital twins will continue to evolve toward three-dimensional modeling, unmanned operation, and city-wide coverage.
Since the concept was first proposed in 2017, the city digital twin has been widely promoted and recognized, becoming a new approach to refined urban governance.
In recent years, the key technologies of city digital twins have achieved a breakthrough from quantity to quality, reflected above all in scale: large-scale dynamic perception and mapping (lower modeling cost), large-scale online real-time rendering (shorter response time), and large-scale joint simulation and deduction (higher accuracy). Large-scale city digital twins have already made great progress in application scenarios such as traffic governance, disaster prevention and control, and “dual carbon” (carbon peaking and carbon neutrality) management.
Building on this trend toward scale, the city digital twin of the future will continue to evolve toward three-dimensional modeling, unmanned operation, and city-wide coverage.
Generative AI
Generative AI will usher in an explosion of applications, greatly advancing the production and creation of digital content.
Generative AI (AIGC) refers to technology that uses existing text, audio, or images to create new content.
Over the past year, its technical progress has come mainly from three areas: in image generation, diffusion models represented by DALL·E 2 and Stable Diffusion; in natural language processing (NLP), ChatGPT based on GPT-3.5; and in code generation, Copilot based on Codex. At this stage, generative AI is typically used to produce product prototypes and first drafts; its application scenarios include writing and graphic creation, code generation, games, advertising, and artistic design.
In the future, generative AI will become a mainstream foundational technology, greatly improving the richness, creativity, and production efficiency of digital content; its application boundary will keep expanding as the technology advances and costs fall.
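The diffusion models behind DALL·E 2 and Stable Diffusion are trained by gradually adding noise to data and learning to reverse the process. The forward step has a well-known closed form, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise; a toy numpy sketch using the standard linear noise schedule (the 8-element "image" is a hypothetical stand-in):

```python
import numpy as np

rng = np.random.default_rng(42)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # standard linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)  # cumulative fraction of signal retained

def q_sample(x0, t):
    """Forward diffusion: sample the noised x_t from clean data x0 in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones(8)  # toy "image" of 8 pixels
early, late = q_sample(x0, 10), q_sample(x0, 999)

# Early steps keep almost all of the signal; by t=999 it is nearly pure noise.
print(alpha_bar[10], alpha_bar[999])
```

Training a generator then amounts to learning a network that predicts the noise added at each step, so that sampling can run the chain in reverse, from pure noise back to data.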
We believe that cross-integration will become the key word of the 2023 technology trend.
Staying true to the essence of technology, we strive to make scientific, objective, and neutral predictions.
We hope these trends spark thought and resonance among scientists, entrepreneurs, engineers, and technology enthusiasts broadly, promote technological innovation through the collision of ideas, and contribute to both high-level technological self-reliance and mutually beneficial global development.
Create the future with science and technology.