
Published: 2024-05-26
TMTPost CEO: Five Major Misconceptions on China's Catchup in the AI Race

AsianFin--It's crucial to assess how many years China lags behind the United States in AI, said TMTPost founder, Chairperson and CEO Zhao Hejuan, in a recent speech delivered at a conference organized by Cheung Kong Graduate School of Business and Shantou University.

In the speech, titled "Five Misconceptions About China's Catchup in the AI Race," she noted that many argue that after GPT-3 was released in 2020 and ChatGPT came out in 2022, China quickly developed models similar to GPT-3, and that after GPT-4 was released, it took China no more than two years to develop a comparable model. However, that does not mean the gap between Chinese companies and their peers is only one to two years, said Zhao, who is an alumna of Cheung Kong Graduate School of Business.

"I find it rather misleading to use such time frames to describe the gaps because they are generational innovation timescales, not capability gaps," she added.

The following is the main content of the speech, edited by TMTPost for brevity and clarity:

Dear alumni, the topic of my speech today is "Five Major Misconceptions on China's Catchup in the AI Race."

From the perspective of TMTPost, I play two roles in AI: researcher and reporter covering the field, and participant in applying AIGC to the content industry's transformation.

TMTPost has closely followed the development of AI since the AI 1.0 era. In that era, whether measured by Chinese listed companies or by applications, we seemed to be catching up with the United States. In the AI 2.0 era, the era of AIGC, however, we came to realize that China had fallen behind almost overnight.

I listened carefully to the remarks by each guest yesterday. One of the guests argued that China's quick catchup after ChatGPT went viral actually indicates that China followed hard on the heels of the United States in terms of strengths and capability building.

However, I'd like to offer a reality check. I believe we might be overly optimistic about the immediate future. The optimism isn't confined to the Chinese market; it extends to our expectations about the pace of the global AI application boom. I suspect progress in the short term won't be as fast as everyone expects, while in the long term there's a risk of focusing solely on immediate profitability.

For over a decade, we've been diligently covering developments in this field, closely monitoring AI-related entrepreneurship. However, we find ourselves in a somewhat stagnant position now. It's time to face the reality and strategize our way out of the "pseudo-AI entrepreneurship zone."

Let me explain in detail.

The two most talked-about things in the AI field this year are: the recent release of AlphaFold 3 and the upcoming release of GPT-5.

First, let's talk about AlphaFold 3, the model released by the Google DeepMind team on May 8. TMTPost was the first outlet in China to report on it and offered readers the most comprehensive coverage.

In 2022, an enhanced edition of AlphaFold 2 was launched. Fast forward two years to today, and we witness the unveiling of AlphaFold 3, a groundbreaking tool for predicting protein structures in biology. The pivotal shift in this evolution lies in the change of the underlying computational methodology and model algorithm.

AlphaFold 3 integrates a combination of Transformer-based generative models and diffusion models. This fusion results in a remarkable advancement, with AlphaFold 3 boasting a prediction accuracy improvement of 100% compared to existing methods.
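
To make the architectural idea above concrete, below is a minimal, hypothetical sketch of how a Transformer trunk can condition a diffusion module that iteratively denoises 3D coordinates. This is my own illustration of the general Transformer-plus-diffusion pattern mentioned in the speech, not DeepMind's code; every class name, dimension, and update rule here is an assumption chosen for brevity.

```python
# Conceptual sketch only (not DeepMind's code): a Transformer "trunk" produces
# conditioning embeddings, and a diffusion module iteratively denoises 3D
# coordinates. All names, dimensions, and the update rule are illustrative.
import torch
import torch.nn as nn

class TrunkEncoder(nn.Module):
    """Transformer encoder turning a token sequence (e.g. residues) into embeddings."""
    def __init__(self, vocab=32, dim=128, heads=4, layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, tokens):                    # tokens: (batch, seq)
        return self.encoder(self.embed(tokens))   # (batch, seq, dim)

class DiffusionDenoiser(nn.Module):
    """Predicts the noise added to 3D coordinates, conditioned on trunk output."""
    def __init__(self, dim=128):
        super().__init__()
        self.proj = nn.Linear(3 + dim + 1, dim)
        self.out = nn.Linear(dim, 3)

    def forward(self, noisy_xyz, cond, t):        # noisy_xyz: (batch, seq, 3)
        t_feat = t.view(-1, 1, 1).expand(-1, noisy_xyz.size(1), 1)
        h = torch.relu(self.proj(torch.cat([noisy_xyz, cond, t_feat], dim=-1)))
        return self.out(h)                        # predicted noise: (batch, seq, 3)

@torch.no_grad()
def sample_structure(tokens, trunk, denoiser, steps=50):
    """Start from pure noise and iteratively denoise coordinates (crude Euler-style)."""
    cond = trunk(tokens)
    xyz = torch.randn(tokens.size(0), tokens.size(1), 3)
    for i in reversed(range(steps)):
        t = torch.full((tokens.size(0),), i / steps)
        xyz = xyz - denoiser(xyz, cond, t) / steps
    return xyz
```

The point of the sketch is only the division of labor: the generative trunk encodes the input once, while the diffusion module refines coordinates over many noise steps.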

AlphaFold 2 had already doubled prediction accuracy compared with its predecessors, and now it has doubled again. Scientists have drawn comparisons suggesting that this advance could propel biological research forward by the equivalent of hundreds of millions of years and potentially save tens of trillions of dollars. This underscores the immense impact of AIGC.

However, China's research achievements in this field are relatively scarce. Today, TMTPost published a clip from a speech Professor Yan Ning gave about two years ago, in which she remarked that accurate prediction of protein-related structures seemed unattainable with AI. The release of AlphaFold 3 appears to have effectively disproved that assessment.

The second is the upcoming release of GPT-5.

I believe the impact of this event will be as significant as the disruptive technological leap brought by AlphaFold 3, if not greater. The release of GPT-4 already surpassed the shock brought by GPT-3.

Why has China been able to develop its own versions of these models so rapidly? I attribute this primarily to open-source practices. Before GPT-3, OpenAI operated on open-source principles, and even Google's Transformer work was published openly. After GPT-3, however, OpenAI shifted to closed source.

This indicates a significant leap from GPT-3 to GPT-4, and the forthcoming GPT-5 is poised to achieve another substantial advancement compared to GPT-4, addressing many existing limitations.

When I met OpenAI founder and CEO Sam Altman last September, he mentioned that OpenAI had been laying the groundwork for GPT-5 for some time. If GPT-5 merely offered incremental improvements in capabilities, it wouldn't require such extensive preparation. One fundamental change expected in GPT-5 involves separating the inference models from the related data, and potentially introducing its own search engine.

These AI advancements are remarkable. To put it pessimistically, China is far behind; to put it optimistically, China still has the capacity to catch up.

Next, I would like to explain why China must recognize its status as a follower in AI, refrain from overestimating its capabilities, and instead dedicate itself to diligent learning. To confront the reality we face today, we need to clear up several misconceptions about where we stand.

Misconception 1: The Gap between China and the United States in AI is Only 1 to 2 Years.

I believe it's imperative to challenge the prevalent belief that the disparity between China and the U.S. in AI amounts to merely 1 to 2 years. Is it truly such a narrow timeframe? And if so, what substantiates the claim? Many argue that China's performance after the release of GPT-3 in 2020 and of ChatGPT in 2022 demonstrates our ability to swiftly develop models akin to U.S. innovations, and that with the subsequent release of GPT-4, we promptly produced a model on par with it. But does this imply that our gap is indeed only 1 to 2 years? Is this assertion accurate?

I find it somewhat disingenuous to characterize the gap using such temporal parameters, as they correspond to generational innovation cycles rather than our proficiency disparities.

Consider this: as long as GPT-5 remains unreleased, we might not be able to develop a similar model even in a decade. Yet once it is released, we might need only 2 to 3 years to catch up. Nevertheless, the caliber of the GPT-5 model merely represents a milestone in their innovation and iteration, not an indication of our own capability level. This distinction is crucial, as it underscores a fundamental gap.

We must understand that this is truly a gap led by innovation, not a situation where we catch up within two years with a single model.

Misconception 2: China is the Largest Market for AI Patents and Talent Globally.

During the AI 1.0 era in particular, Chinese investors and entrepreneurs giving speeches in Silicon Valley would proclaim China's AI superiority over the U.S. A common metric supporting this claim is China's status as the largest market for AI patents and talent worldwide.

This claim rests on the volume of AI-related papers published and AI patents filed in China, both of which rank highest globally. However, what's the reality?

Examining this chart of the new generation of global digital technologies, we can see that the majority of entries are AI-related papers. China undeniably holds a prominent position in the sheer quantity of AI-related papers. However, when it comes to top-tier papers or citations, we lag behind.

In essence, while we lead globally in the quantity of papers, we fall behind in terms of top-tier papers or those with high citation rates, not only compared to the U.S. but also countries like Germany, Canada, and Britain.

Now, let's assess our engineering talent.

China indeed produces a substantial number of engineers and computer science professionals from universities. Many tech giants in Silicon Valley actively recruit computer experts from prestigious Chinese institutions such as Tsinghua and Peking University.

However, as of 2022, although China ranked roughly second globally in top-tier researchers, the number of China's top-tier AI researchers was about one-fifth that of the U.S. And as of 2024, this gap may have widened even further compared with two years ago.

Therefore, the reality doesn't align with the notion that China is the world's AI talent powerhouse.

Misconception 3: The Main Obstacle for China's AI Lies in "Bottlenecked Computing Power".

The primary hurdle for Chinese AI is often identified as "bottlenecked computing power." The prevailing belief is that once we acquire relevant chips through various means, we'll reach the required level.

However, allow me to inject a dose of reality: in this phase of AI 2.0 development, computing power alone isn't sufficient. Model innovation capability and data capability are equally critical. Thus, the current reality is that not only is computing power a bottleneck, but so too are the innovation capabilities of our underlying models and our data capacity.

Let's address data capability first. Many assume that China, being a vast market with abundant consumer and corporate behavior data, must possess ample data resources. But I must be frank: much of this data is either irrelevant or inaccessible.

Earlier this year, during a conversation about meteorological data with a Chinese-American scientist who advises the China Meteorological Administration, I mentioned that there are companies promoting models for meteorological calculations. The scientist bluntly stated that almost all of our meteorological data is useless because historical records have not been organized, consolidated, and integrated into computable formats.
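
As a purely hypothetical illustration of what "integrating historical records into computable formats" can mean in practice (the station names, fields, and units below are invented and are not drawn from any real CMA dataset), consider turning heterogeneous legacy logs into one normalized table:

```python
# Hypothetical illustration: normalizing heterogeneous historical weather logs
# into a single computable table. All records, fields, and units are invented.
import pandas as pd

raw_records = [
    {"station": "A01", "date": "1998/07/03", "temp_F": 86.0, "rain_in": 0.4},
    {"station": "B17", "date": "03-07-1998", "temp_C": 30.1, "rain_mm": 11.0},
]

def normalize(rec):
    """Convert one raw record into SI units with a parsed timestamp."""
    temp_c = rec.get("temp_C", (rec.get("temp_F", float("nan")) - 32) * 5 / 9)
    rain_mm = rec.get("rain_mm", rec.get("rain_in", float("nan")) * 25.4)
    date = pd.to_datetime(rec["date"], dayfirst="-" in rec["date"])
    return {"station": rec["station"], "date": date,
            "temp_c": round(temp_c, 1), "rain_mm": round(rain_mm, 1)}

df = pd.DataFrame(normalize(r) for r in raw_records)
print(df)  # one tidy table a model can actually compute over
```

The snippet itself is trivial; the point it illustrates is that until this kind of normalization is done at national scale, raw data volume contributes little to model training.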

Currently, China faces a significant deficiency in this regard. In the U.S., the most crucial aspect of the AI ecosystem is the development of the data market. In China, by contrast, there is effectively no mature data market. This underscores a critical aspect of ecosystem development: the establishment of a robust data market. Without one, what meaningful calculations can be made?

Model companies in China may boast leading computing capabilities domestically, but the entire Chinese data market comprises less than 1% of the global data market. Moreover, when considering the usefulness of all data, whether research data, user application data, videos, or texts, the majority of mainstream global data, particularly research and user application data, is in English.

Consequently, if we cannot effectively compute with English data, how can we develop competitive large models of our own? This presents a significant challenge. That's why I emphasize that the bottleneck China faces isn't solely computing power. It encompasses the entire ecosystem, from computing power to innovation in underlying models, to data capabilities and the establishment of a robust data market. Unfortunately, we are falling behind in all these aspects. Considering the time factor, it's extremely challenging to build up this capability adequately within ten years.

Misconception 4: Closed-Source Large Models vs. Open-Source Large Models: Which Is Better?

Recently, entrepreneurs and internet personalities have engaged in a debate regarding the superiority of closed-source versus open-source large models. However, I believe this debate is somewhat irrelevant; what truly matters is which approach is more suitable for a given context.

Both open-source and closed-source models come with their own advantages and disadvantages, much like the comparison between iOS (closed-source) and Android (open-source) operating systems. At present, in terms of performance, and especially for large language models with tens of billions or even trillions of parameters, closed-source models tend to perform significantly better than their open-source counterparts.

For many applications or specific scenarios, not every model needs to be as large as tens of billions of parameters. Hence, open-source models remain viable to a certain extent.

For entities like OpenAI, aiming for Artificial General Intelligence (AGI), closed-source models may expedite the concentration of resources and funds towards achieving the AGI goal more swiftly and efficiently.

However, for widespread application and increased iterations, open-source large models are also indispensable. Thus, we should transcend the debate over whether open-source or closed-source large models are superior. Instead, the paramount consideration should be whether we possess the capability for innovation and originality, rather than merely imitating at a basic level.

In discussions about a "hundred-model battle" or a "thousand-model battle," if each of our models harbors its own innovative elements and contributes inventive functions within its respective domain, then the quantity of models ceases to be an issue.

Conversely, in a "hundred-model battle" or "thousand-model battle" where innovation is absent and only low-level imitation and replication prevail, there is little need for numerous models. Thus, the crux of the matter lies in whether we can genuinely establish ourselves on the global stage in terms of model innovation capability. This warrants careful consideration.

Misconception 5: The Explosion of AI in Major Vertical Industries Will Happen Quickly.

In China, there's often talk about an imminent explosion in vertical industries propelled by AI, with this year being touted as the inaugural year for large-scale model applications to surge. However, I've been cautioning friends that this year likely won't mark the explosion of AI in vertical industries. While it might signify the start of applications, it's not an explosion. Such transformative shifts don't occur overnight because every progression adheres to certain rules, and industrial development follows a distinct pattern.

The fundamental issue is that our overall infrastructure capability hasn't yet met the threshold for widespread industrial applications.

Consider this: even if our Sora-like or other applications achieve 50% efficiency, does that imply we can deploy them in 50% of applications? Not necessarily. If an industrial application demands a 90% efficiency threshold and you're only at 50%, or even 89%, rapid and widespread adoption in that industry becomes unattainable.

It's important to realize that the bottleneck isn't just China's computing power; it's a global bottleneck affecting computing power worldwide, including American companies. That's why, despite OpenAI's work toward GPT-5 and GPT-6, progress remains sluggish. At their core, large AI models rely on "brute force": having sufficiently vast data, computing power, and energy. Without these resources, they hit bottlenecks and progress only inches forward.

Many companies may entertain the idea that, since Chinese firms acknowledge they trail the U.S. in technological innovation yet boast a larger market and stronger application capabilities, they should prioritize entrepreneurship and application development to achieve quick success or results.

However, I believe this might hold true in the long term, but not necessarily in the short term.

Even OpenAI CEO Sam Altman stated that 95% of startup companies rely on large models for development, but each major iteration of large models replaces a cohort of startup firms.

AI doesn't operate outside the realm of general business laws. So, even if AI is deployed, it won't automatically supplant existing products until foundational capabilities have reached a certain threshold.

This concern was also echoed by the founder of Pika during our conversation earlier this year. When I asked if she considered Runway as Pika's primary competitor, she pointed to OpenAI as her main concern because of their inevitable development of multimodal technology. So, I believe that until foundational capabilities reach a certain level, newly developed AI applications won't necessarily displace existing ones.

Since the fundamental infrastructure capabilities haven't reached the stage of industry transformation, we can't herald a "booming" new era of AI.

Despite claims that China's mobile internet applications are global frontrunners, our current historical juncture doesn't correspond to the internet era or to the explosion phase of mobile internet applications. Instead, the current stage of AI development is akin to Cisco's early phase, not the later stage of internet development.

Today's NVIDIA is like the Cisco of the past, when Cisco dominated the US market and its stock price rose 60-fold in a year. At that time, were there any noteworthy internet companies? Many of today's internet companies hadn't even appeared yet. Only later, as basic infrastructure improved, communication technology advanced from 2G to 4G, and network technology matured, did the mobile internet and long- and short-form video applications emerge.

The current state of AI applications is primarily focused on enhancing industrial efficiency, but achieving a complete transformation of industries will require considerable time and patience.

This is why we refer to it as weak artificial intelligence, and China's advantage in its vast market cannot be fully leveraged at present. In the short term, the primary focus remains on content generation-related auxiliary tools, such as search, question answering, text and image processing, and text-to-audio/video conversion.

So, how should we navigate this landscape?

I believe it's imperative to establish a social consensus regarding our actions in the global arena and during the course of AI development.

First and foremost, we must prioritize enhancing fundamental innovation and fostering long-term capacity building.

This involves building a robust ecosystem, beginning with education. Initiatives such as establishing AI education programs, evaluating university education systems, and implementing frameworks for academic openness and collaboration should revolve around fostering innovative technological capabilities in AI. Additionally, we must enhance the foundational innovation capacities required for large model development. Without this groundwork, all other efforts would be akin to "water without a source."

Second, we must adopt a patient approach to navigate the AI explosion cycle across various industrial application scenarios. Every industry transformed by AI undergoes a cyclical process starting from changes in underlying technology, and this transformation won't occur overnight or in a single leap.

I firmly believe that each industry potentially influenced by AI will experience a bottom-up transformation and initiate a new cycle for the industry. It's not about immediate changes at the application layer. This principle applies to sectors such as media, robotics, manufacturing, biopharmaceuticals, and more. While they will all undergo disruptive effects, the ability of our fundamental research capabilities to keep pace becomes paramount.

Every industry begins its journey with foundational capabilities and infrastructure construction from ground zero, constituting the real industrial cycle.

Thirdly, we need to adopt a more open mindset to embrace the competition and challenges presented by global AI development without limiting ourselves.

While some may argue that Americans are holding us back, I believe it's essential that we don't hinder our own progress. This is why I advocate against engaging in low-level imitative competition. Instead, we should take a more proactive approach to AI innovation, even if that means easing off for a time on AI governance, norms, and ethical frameworks, and embracing a more open attitude towards advancement.

I sincerely hope that our advancements in AI research won't follow the beaten path of new energy vehicles. A decade ago there were genuine innovations in new energy vehicles, such as in intelligent experiences and battery technology; today, even with Xiaomi's entry, the field is stuck in low-level, repetitive pursuits that hinder our ability to progress.

So, I hope our basic research capabilities and innovation capabilities can progress faster, and we should maintain patience in our endeavors.

Lastly, I'd like to recommend TMTPost's new product, AGI. TMTPost has been a significant contributor and participant in the AI field, and AGI is its latest information offering. AGI primarily focuses on cutting-edge AI information, aggregating global AI technology trends. Through various content formats centered around in-depth analysis, it explores industry trends, technological innovations, and business applications, delivering the latest and most relevant AI insights to enterprises and users. AGI aims to present a comprehensive and dynamic view of the AI landscape.

 