
AI Chips 2023-2033




Publisher: IDTechEx
Published: May 2, 2023
Price (electronic file, 1-5 user license): US$7,000
Pages: 345
Language: English


 

Summary

This report provides a detailed study and analysis of 19 players involved in AI chip design, 10 design start-ups, and the world's most prominent semiconductor manufacturers.
 
Key contents (excerpted from the table of contents)
  • AI hardware - technology overview
  • AI chip fabrication - player capabilities and investments
  • Supply chain players
 
Report Summary
The global AI chips market will grow to US$257.6 billion by 2033, with the three largest industry verticals at that time being IT & Telecoms; Banking, Financial Services and Insurance (BFSI); and Consumer Electronics. Artificial Intelligence is transforming the world as we know it. From the success of DeepMind's AlphaGo over Go world champion Lee Sedol in 2016 to the robust predictive abilities of OpenAI's ChatGPT, the complexity of AI training algorithms is growing at a startlingly fast pace: the amount of compute necessary to run newly developed training algorithms appears to double roughly every four months. To keep pace with this growth, hardware for AI applications is needed that is not just scalable (allowing for longevity as new algorithms are introduced while keeping operational overheads low) but also able to handle increasingly complex models at a point close to the end-user. A two-pronged approach, handling AI in the cloud and at the edge, is required to fully realize an effective Internet of Things.
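As a quick sanity check on that doubling claim, the implied growth factor is a simple exponent (the function name here is ours, purely illustrative):

```python
# If the compute required to train new models doubles every 4 months,
# the growth factor after a given number of months is 2 ** (months / 4).

def compute_growth(months: float, doubling_period_months: float = 4.0) -> float:
    """Multiplicative growth in required compute after `months` months."""
    return 2.0 ** (months / doubling_period_months)

print(compute_growth(12))  # one year of doubling every 4 months -> 8x
print(compute_growth(24))  # two years -> 64x
```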
 
Following a period of dedicated research by expert analysts, IDTechEx has published a report that offers unique insights into the global AI chip technology landscape and corresponding markets. The report contains a comprehensive analysis of 19 players involved with AI chip design, as well as an account of 10 design start-up companies, and the most prominent semiconductor manufacturers globally. This includes a detailed assessment of technology innovations and market dynamics. The market analysis and forecasts focus on total revenue under three scopes (all-inclusive; excluding multi-purpose chips; and excluding both multi-purpose and cloud-based offerings), with granular forecasts disaggregated by geography (Europe, APAC, and North America), processing type (edge and cloud), chip architecture (GPU, CPU, ASIC, and FPGA), packaging type (System-on-Chip, Multi-Chip Module, and 2.5D+), application (language, computer vision, predictive, and other), and industry vertical (industrial, healthcare, automotive, retail, media & advertising, BFSI, consumer electronics, IT & telecoms, and other).
 
In addition, this report contains rigorous calculations pertaining to costs of manufacture, design, assembly, test & packaging, and operation for chips at nodes from 90 nm down to 3 nm, for AI purposes. Forecasts are presented on the design costs and manufacture costs (investment per wafer) as semiconductor manufacturers move to more advanced nodes beyond 3 nm. The report presents an unbiased analysis of primary data gathered via our interviews with key players, and it builds on our expertise in the semiconductor and electronics sectors.
 
This research delivers valuable insights for:
  • Companies that require AI-capable hardware.
  • Companies that design/manufacture AI chips and/or AI-capable embedded systems.
  • Companies that supply components used in AI-capable embedded systems.
  • Companies that invest in AI and/or semiconductor design, manufacture, and packaging.
  • Companies that develop other technologies for machine learning workloads.
 
 
The rise of intelligent hardware
The notion of designing hardware to fulfil a certain function (particularly to accelerate certain types of computation by taking control of them away from the main, or host, processor) is not a new one. In the early days of computing, CPUs (Central Processing Units) were paired with mathematical coprocessors known as Floating-Point Units (FPUs), whose purpose was to offload complex floating-point operations from the CPU to a special-purpose chip that could handle them more efficiently, freeing the CPU to focus on other things. As markets and technology developed, so too did workloads, and new pieces of hardware were needed to handle them. A particularly noteworthy example of one of these specialized workloads is the production of computer graphics, where the accelerator in question has become something of a household name: the Graphics Processing Unit (GPU).
 
Just as computer graphics required a different type of chip architecture, the emergence of machine learning has brought about demand for another type of accelerator, one capable of efficiently handling machine learning workloads. This report details the differences between CPU, GPU, and Field Programmable Gate Array (FPGA) architectures, and their relative effectiveness at handling machine learning workloads. Application-Specific Integrated Circuits (ASICs) can be designed to handle specific workloads effectively, and the architectures of several of the world's leading designers of ASICs for AI are analyzed in this report. The need for chips capable of handling ML workloads will only increase as the benefits for consumers (increased functionality in consumer electronics, more accurate image classification and object detection in security cameras, and low-latency, high-precision inference in autonomous vehicles, for example) are realized. This is reflected in the forecast compound annual growth rate (CAGR) of 24.4% for AI chips (including those also used for purposes other than ML workloads, as well as chips accessible through a cloud service) between 2023 and 2033.
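The headline figures can be cross-checked with a couple of one-line helpers. Note that the roughly US$29 billion 2023 base below is our back-calculation from the quoted 2033 value and CAGR, not a number taken from the report:

```python
# CAGR relates a start value, an end value, and a number of years:
#   end = start * (1 + rate) ** years

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two values."""
    return (end / start) ** (1 / years) - 1

def project(start: float, rate: float, years: int) -> float:
    """Value after `years` of compounding at `rate`."""
    return start * (1 + rate) ** years

# A 2033 market of US$257.6bn at a 24.4% CAGR implies a 2023 base of
# about US$29bn: 257.6 / 1.244**10 ≈ 29.
print(round(cagr(29.0, 257.6, 10), 3))
```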
 
Compound Annual Growth Rates for each of the three main forecasts in this report, between the years 2023 and 2033. Source: IDTechEx
 
AI is on the global agenda
AI's capabilities in natural language processing (understanding textual data not just linguistically but contextually), speech recognition (converting spoken language to text in the same language, or translating it into another), recommendation (sending personalized adverts or suggestions to consumers based on their interactions with service items), reinforcement learning (making predictions based on observation and exploration, as used when training agents to play a game), object detection, and image classification (distinguishing objects from an environment and deciding what each object is) are so significant to the efficacy of certain products (such as autonomous vehicles and industrial robots), and to models of national governance, that the development of AI hardware and software has motivated national and regional funding initiatives across the globe. Because AI-capable processors and accelerators depend on semiconductor manufacturers, and those capable of producing the advanced nodes required for data-centre chips are concentrated in the Asia-Pacific region (particularly Taiwan and South Korea), the ability to manufacture AI chips rests on the possible supply from a select few companies. Edge devices are less dependent on leading-edge node technology, given that these chips are typically used for low-power inference, but the fact remains that the global supply chain is heavily indebted to a specific geographic region.
 
The risk of relying on manufacturing capabilities concentrated in a specific geographic region was realized in 2020, when a number of compounding factors (such as the COVID-19 pandemic, the rise of data mining, a Taiwanese drought, fabrication facility fires, and neon procurement difficulties) led to a global chip shortage, with demand for semiconductor chips exceeding supply. Since then, the largest stakeholders in the semiconductor value chain (the US, the EU, South Korea, Taiwan, Japan, and China) have sought to reduce their exposure to a manufacturing deficit, should another set of circumstances result in an even more severe shortage. National and regional government initiatives have been put in place to incentivize semiconductor manufacturers to expand operations or build new facilities. These initiatives are discussed in the report, where the funding is broken down and the reasoning behind them and their implications for other stakeholders (such as the restrictions imposed on China by the US, and how China can build a national semiconductor supply chain around these restrictions) are detailed. In addition, the private investments announced for semiconductor manufacture since 2021 are outlined, along with companies' current semiconductor manufacturing capabilities, particularly in relation to AI.
 
Shown here are the proposed and confirmed investments into semiconductor facilities by manufacturers since 2021. Where currencies have been listed in anything but US$, these have been converted to US$ as of publication date. Source: IDTechEx
 
The cost of progress
Machine learning is the process by which computer programs use data to make predictions based on a model, then optimize the model to better fit the data by adjusting the weightings used. Computation therefore involves two stages: training and inference. The first stage of implementing an AI algorithm is the training stage, where data is fed into the model and the model adjusts its weights until it fits the provided data appropriately. The second stage is the inference stage, where the trained AI algorithm is executed and new data (not seen during training) is classified in a manner consistent with the training data. Of the two stages, training is the more computationally intense, given that it involves performing the same computation millions of times (the training for some leading AI algorithms can take days to complete). This poses the question: how much does it cost to train AI algorithms?
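The two stages can be made concrete with a deliberately tiny example (ours, not from the report): training adjusts the weights of a one-variable linear model by gradient descent, and inference then applies the trained weights to data the model has never seen.

```python
def train(xs, ys, lr=0.01, epochs=2000):
    """Training: adjust weights w, b so that y ~ w*x + b fits the data."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(w, b, x):
    """Inference: apply the trained model to new data."""
    return w * x + b

# Train on points from y = 3x + 1, then score an unseen input.
w, b = train([0, 1, 2, 3], [1, 4, 7, 10])
print(round(infer(w, b, 5), 2))  # close to 16
```

Even at this scale, the asymmetry the report describes is visible: training loops over the data thousands of times, while inference is a single pass.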
 
In an effort to quantify this, IDTechEx has rigorously calculated the design, manufacture, assembly, test & packaging, and operational costs of AI chips from 90 nm down to 3 nm. Considering that a 3 nm chip packs a given number of transistors into a smaller area than a chip at a more mature node, the cost of deploying a leading-edge chip for a given AI algorithm can be compared with that of a trailing-edge chip capable of similar performance for the same algorithm. For example, should a 3 nm chip be used continuously for five years, the cost incurred will be 45.4X less than that of running a 90 nm chip with the same number of transistors continuously for five years, based on the model of a 3 nm chip that we employ. This includes the initial production costs of the respective chips, and can be used to determine whether it is worthwhile to upgrade from a more mature node chip to a more advanced node chip, depending on how long the chip is to remain in service.
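The shape of that comparison can be sketched as a toy total-cost-of-ownership model. Every number below is hypothetical, chosen only to illustrate the trade-off; IDTechEx's actual cost model (and its 45.4X result) is far more detailed.

```python
def total_cost(production_usd: float, power_watts: float, years: float,
               usd_per_kwh: float = 0.10) -> float:
    """Production cost plus electricity cost over the service life."""
    hours = years * 365 * 24
    energy_kwh = power_watts * hours / 1000
    return production_usd + energy_kwh * usd_per_kwh

# Hypothetical chips of similar performance: the leading-edge part costs
# more to produce but draws far less power for the same work.
leading_edge = total_cost(production_usd=400.0, power_watts=75.0, years=5)
mature_node = total_cost(production_usd=150.0, power_watts=300.0, years=5)
print(leading_edge < mature_node)  # over 5 years, the denser node wins
```

Whether the upgrade pays off depends on the service life: for a short enough deployment, the lower production cost of the mature node wins instead.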
 
The costs associated with producing and operating a chip at each of the given nodes over the course of 5 years, based on our model of a 3 nm chip used for AI purposes. Source: IDTechEx
 
Market developments and roadmaps
IDTechEx's model of the global AI chips market considers architectural trends, developments in packaging, the dispersion/concentration of funding and investments, historical financial data, and geographically-localized ecosystems to give an accurate representation of the evolving market value over the next ten years.




Table of Contents

1. EXECUTIVE SUMMARY
1.1. What is an AI chip?
1.2. AI acceleration
1.3. AI chip capabilities
1.4. AI chip applications
1.5. Edge AI
1.6. Advantages and disadvantages of edge AI
1.7. The AI chip landscape - overview
1.8. The AI chip landscape - key hardware players
1.9. The AI chip landscape - hardware start-ups
1.10. The AI chip landscape - other than hardware
1.11. AI landscape - geographic split: China
1.12. AI landscape - geographic split: USA
1.13. AI landscape - geographic split: Rest of World
1.14. TSMC - the foremost AI chip manufacturer
1.15. Semiconductor foundry node roadmap
1.16. Roadmap for advanced nodes
1.17. Traditional supply chain
1.18. IDM fabrication capabilities
1.19. Foundry capabilities
1.20. Map of proposed and confirmed funding
1.21. Proposed government funding
1.22. Chip transistor density
1.23. TSMC transistor densities
1.24. Chip design costs
1.25. Summary of chip costs
1.26. Analysis: production costs vs operating costs
1.27. Analysis: cost effectiveness of nodes
1.28. Analysis: cost to create new leading node chips
1.29. Future chip design costs
1.30. Future capital investment per wafer
1.31. Capital investment for leading-edge nodes
1.32. All-inclusive AI chip market forecast
1.33. AI chip (excluding multi-purpose) market forecast
1.34. Edge vs cloud computing
1.35. Growth rates and analysis
2. FORECASTS
2.1. Leading-edge node design, manufacturing, ATP, and operational costs
2.1.1. Overview
2.1.2. Design costs
2.1.3. Operational costs
2.1.4. Fabrication costs
2.1.5. Assembly, test and packaging costs
2.1.6. Comparison and analysis
2.2. Market forecasts
2.2.1. AI chip forecast 2023 - 2033
2.2.2. Disaggregated forecasts
3. AI HARDWARE - TECHNOLOGY OVERVIEW
3.1. Introduction to AI chips
3.1.1. What is an AI chip?
3.1.2. AI acceleration
3.1.3. Why AI acceleration is needed
3.1.4. The interaction between hardware and software
3.1.5. AI chip capabilities
3.1.6. AI chip applications
3.1.7. AI in robotics
3.1.8. AI in vehicles
3.1.9. Edge AI
3.1.10. Advantages and disadvantages of edge AI
3.1.11. The AI chip landscape - overview
3.1.12. The AI chip landscape - key hardware players
3.1.13. The AI chip landscape - hardware start-ups
3.1.14. The AI chip landscape - other than hardware
3.1.15. AI landscape - geographic split: China
3.1.16. AI landscape - geographic split: USA
3.1.17. AI landscape - geographic split: Rest of World
3.1.18. TSMC - the foremost AI chip manufacturer
3.1.19. Integrated circuits explained
3.1.20. The need for specialized chips
3.1.21. AI chip basics
3.1.22. AI chip types
3.1.23. Deep neural networks
3.1.24. Training and inference
3.1.25. AI chip capabilities
3.1.26. Parallel computing
3.1.27. Low-precision computing
3.1.28. Major players
3.1.29. Emerging technologies: neuromorphic photonic architectures
3.1.30. Components of a neural network
3.1.31. Photonic processing systems
3.2. Number representation
3.2.1. Fixed-point representation
3.2.2. Floating-point representation - example
3.2.3. Floating-point representation - range
3.2.4. Floating-point representation - rounding
3.2.5. The IEEE standards
3.2.6. Denormalized numbers
3.2.7. Quantization
3.3. Transistor Technology
3.3.1. How transistors operate: p-n junctions
3.3.2. How transistors operate: electron shells
3.3.3. How transistors operate: valence electrons
3.3.4. How transistors work: back to p-n junctions
3.3.5. How transistors work: connecting a battery
3.3.6. How transistors work: PNP operation
3.3.7. How transistors work: PNP
3.3.8. How transistors switch
3.3.9. From p-n junctions to FETs
3.3.10. How FETs work
3.3.11. Moore's law
3.3.12. Gate length reductions
3.3.13. FinFET
3.3.14. GAAFET, MBCFET, RibbonFET
3.3.15. Process nodes
3.3.16. Device architecture roadmap
3.3.17. Evolution of transistor device architectures
3.3.18. Carbon nanotubes for transistors
3.3.19. CNTFET designs
3.3.20. Semiconductor foundry node roadmap
3.3.21. Roadmap for advanced nodes
3.4. GPU architecture
3.4.1. Core count
3.4.2. Memory
3.4.3. Threads
3.4.4. Nvidia and AMD - performance
