AI in Smart Cities


AI in Smart Cities is proving to be of great help, as demonstrated in this AITHORITY article.


AI in Smart Cities: How Innovative Technology is Enabling Smart Cities to Meet Their Sustainability Goals

The evolution of Smart Cities has been inspiring and remarkable to watch. In the recent past, a typical resident might not have found the technical description of a smart city all that enticing, but today, citizens are more aware and more conscious. They are far more concerned about the environment and climatic changes.

Government and civic agencies across various countries, with the help of state-of-the-art artificial intelligence technology, are focusing on reducing carbon footprints, improving infrastructure, and meeting the sustainability goals of smart cities.

Did you know that, according to a report by the McKinsey Global Institute, Smart Cities have the potential to improve basic quality-of-life indicators by 10-30%? They can reduce crime, lower carbon emissions, support better health management, and improve traffic management.

McKinsey Global Institute’s report stated that cities house more than half of the world’s population, and another 2.5 billion people are predicted to move there by 2050.

Today, artificial intelligence and the Internet of Things (IoT), the two concepts that have a major role to play in the development of Smart Cities, are better understood.

What are Smart Cities and Where Does Technology Come into Play?

Let’s begin by understanding the definition of a Smart City. Smart Cities are an intelligent culmination of data and digital technology. They are synonymous with intelligent economic and civic infrastructure with minimal carbon footprints.

A Smart City ensures that its citizens enjoy cutting-edge technology, utility, and mobility while eliminating bureaucratic red tape. At the end of the day, a Smart City’s ultimate goal is to improve people’s quality of life, simplify living, boost economic growth, and contribute to its long-term development.

 

But is it enough for cities to simply fall under the Smart Cities bracket while doing little to meet their sustainability goals? Certainly not. Smart Cities can only be successful if they are built with both people and the environment in mind.

According to UNESCO,

“A smart sustainable city is an innovative city that uses ICTs (information and communication technologies) and other means to improve quality of life, the efficiency of urban operation and services, and competitiveness while ensuring that it meets the needs of present and future generations with respect to economic, social, environmental as well as cultural aspects.”

AI in Smart Cities

From more accessible, efficient services to lowering people’s overall carbon footprint, the many smart city technologies now available and on the horizon might cut expenses, increase safety, better protect the environment, and improve our quality of life.

Traffic Flow Management

Intelligent Traffic Management systems can help alleviate congestion by warning vehicles of bottlenecks and delays. Using deep learning algorithms, these systems can predict and reduce traffic, thereby lowering carbon emissions. Traffic infraction detection systems and AI-enabled cameras can drastically reduce road accidents.

AI is used to evaluate real-time traffic data from cameras and IoT devices in vehicles such as cars, buses, and trains. It recognizes patterns in the data, reduces safety hazards and recurring accidents, and controls traffic light systems.
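The forecasting step can be illustrated with a toy example. This is a minimal sketch, not a production system: real deployments use deep learning on camera and sensor feeds, while here an exponential moving average stands in for the model, and the vehicle counts and road capacity are invented.

```python
# Toy congestion forecast: an exponential moving average (EMA) stands in
# for the deep-learning model described above. Counts and capacity are
# invented for illustration.

def ema_forecast(counts, alpha=0.5):
    """Return a one-step-ahead EMA forecast for each time slot."""
    forecast = counts[0]
    forecasts = []
    for c in counts:
        forecasts.append(forecast)          # forecast made before seeing c
        forecast = alpha * c + (1 - alpha) * forecast
    return forecasts

def congestion_alerts(counts, capacity, alpha=0.5):
    """Flag time slots whose forecast exceeds the road's capacity."""
    return [f > capacity for f in ema_forecast(counts, alpha)]

vehicle_counts = [80, 90, 120, 150, 160, 140]   # vehicles per 5-minute slot
alerts = congestion_alerts(vehicle_counts, capacity=130)
```

Once a slot is flagged, the same signal could drive warnings to drivers or retime traffic lights upstream of the bottleneck.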

Artificial intelligence is rapidly transforming the world around us, and smart city technology, such as parking management and traffic control systems, is one of the most effective answers it offers. With the use of artificial intelligence, one may properly forecast the flow of people, cars, and objects at various locations of interconnected transportation networks.

Smart Parking Spots

Parking has always been one of the major concerns for urban residents, and spending even five minutes looking for a parking spot can be overwhelming. Smart parking will allow commuters to reserve spots through a mobile app, reducing the time spent looking for parking, cutting urban traffic, lessening our carbon footprint, and conserving gasoline.

AI video analytics can detect the number of vehicles and identify parking lines, thus helping in predicting vacant parking spots. This system comes especially handy when a big public event, concert, or game is about to take place and there are high chances of congestion and struggle to park. AI can assist in identifying likely busy regions and recommending the best parking spots. It can assist drivers in avoiding traffic and saving time.

 

By now, several countries are already leveraging intelligent parking systems to help their citizens save time as well as money. The parking system first spots a vacant space and notifies the driver through an app or an indicator. It can also assist in locating available parking spaces in congested places where traffic flow is frequently heavy.

This innovative parking solution collates data from different devices including sensors and cameras. Most of the time, these devices are embedded into the parking lots or are somewhere in proximity to instantly locate vacant spots.
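The spot-finding logic described above can be sketched in a few lines. This is a hypothetical illustration: the bay names, distances, and occupancy flags are invented, and a real system would derive them from the embedded sensors and cameras just mentioned.

```python
# Hypothetical smart-parking lookup: each bay reports (occupied, metres
# from entrance); the app recommends the nearest vacant bay. Bay names
# and distances are invented.

def nearest_vacant(bays):
    """Return the id of the closest vacant bay, or None if the lot is full."""
    vacant = [(dist, bay) for bay, (occupied, dist) in bays.items() if not occupied]
    return min(vacant)[1] if vacant else None

lot = {
    "A1": (True, 10),    # occupied, 10 m from the entrance
    "A2": (False, 25),
    "B1": (False, 15),
}
```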

Alternative Transportation

Infrastructure data is truly a blessing. It empowers smart cities as well as different modes of transportation. Today, people have the luxury of opting for alternative transportation like e-bikes and electric vehicles. Cities benefit from 4G, 5G, and IoT sensors, using AI to better analyze traffic patterns, trends, and effects, cutting travel time, reducing unproductive idling, and lowering total climate impact.

In electric cars, AI assists in controlling energy consumption, safety, and security, and in building the pollution-free, eco-friendly environment that today’s and tomorrow’s societies wish for.

Recently, computer giant Acer launched e-bikes powered by advanced artificial intelligence. The bike, aimed at urban commuters, weighs only 16kg and has been calibrated for “stable and nimble riding,” according to Acer. The intelligent ebiiAssist learns from the rider’s pedaling force, riding circumstances, and chosen level of help to provide a more personalized experience.

Energy Management

Is it even possible to fathom a smart city without thinking of a smart Energy Management System (EMS)? The next question is, what is energy management based on? Mostly, it is based on cutting-edge climate and geospatial technology powered by AI and data analytics. These technologies can improve our response to climate change as well as the overall environmental quality of smart cities.

An Energy Management System is a software-based solution that assists companies and businesses in monitoring, controlling, and optimizing their energy usage. Some of the top players in the global energy management systems market are IBM Corporation, General Electric Co., Cisco Systems Inc., and Siemens AG.

Consumers and businesses are becoming more conscious of the environmental impact of their actions and are seeking solutions to lower their carbon footprint. This is driving the use of EMS solutions as a means of reducing energy consumption and meeting sustainability goals. The growing popularity of smart homes and buildings is driving the use of EMS solutions in the building automation market.

According to Vantage Market Research, the global energy management systems market was valued at $36.4 billion in 2021 and is predicted to rise at a compound annual growth rate (CAGR) of 15.8% from 2022 to 2028.

  • Because of their flexibility, scalability, and cost-effectiveness, cloud-based EMS solutions are gaining popularity. These solutions allow for remote monitoring and control of energy consumption, making it easier for businesses to optimize their energy consumption.
  • EMS solutions incorporate Internet of Things (IoT) and artificial intelligence (AI) technology to enhance energy efficiency and cut expenses. IoT sensors can give real-time data on energy consumption, which AI algorithms can analyze to discover areas for improvement.
  • With the growing use of renewable energy sources such as solar and wind power, EMS solutions are being developed to control and optimize their utilization. Integration with smart grids and battery storage systems is required to ensure an efficient and dependable energy supply.
  • Energy-as-a-Service (EaaS) models are gaining popularity, especially in the commercial and industrial sectors. These models enable enterprises to outsource their energy management to third-party providers, who deploy EMS technologies to optimize energy consumption and save expenses.
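The monitoring-and-alerting core of an EMS can be sketched as a simple baseline comparison. This is a minimal sketch under stated assumptions: the hourly readings, baseline, and tolerance are illustrative, not from any product named above.

```python
# Minimal EMS-style check: flag hours whose metered usage exceeds an
# agreed baseline by more than a tolerance. Readings, baseline and
# tolerance are illustrative, not from any real product.

def excess_hours(readings_kwh, baseline_kwh, tolerance=0.10):
    """Return the indices of hours that exceed baseline * (1 + tolerance)."""
    limit = baseline_kwh * (1 + tolerance)
    return [i for i, kwh in enumerate(readings_kwh) if kwh > limit]

hourly_kwh = [42, 44, 61, 43, 58, 41]
flagged = excess_hours(hourly_kwh, baseline_kwh=45)   # hours worth a closer look
```

In a real deployment the flagged hours would feed a dashboard or trigger automated load-shedding rather than a simple list.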

Water Pressure and Leak Detection 

According to the American Water Works Association, the 237,600 water line breaks that occur in the United States each year cost public water utilities around $2.8 billion.

According to the American Society of Civil Engineers, aging, leaking pipes drain 7 billion gallons every day from our water systems. The World Bank estimated that non-revenue water (NRW) – the cost of water lost due to leaks, as well as standard theft and billing problems – is approaching $14 billion globally.


These numbers are worrisome, but smart technologies can help fix them. In the past decade, smart water meters have been the highlight of this evolution. Water losses in municipal water systems could be drastically reduced with the help of sensors and modern artificial intelligence (AI) technology.

  • The technique, developed by researchers at the University of Waterloo in partnership with industrial partners, can detect even minor leaks in pipelines. It uses sophisticated signal processing techniques and artificial intelligence software to detect leaks in water pipelines via sound.
  • The audio signals are captured by hydrophone sensors, which may be readily and inexpensively put in existing fire hydrants without the need for excavation or shutting them down.
  • Aside from the economic implications of losing treated water, chronic leaks can pose health risks and cause structural damage that worsens over time.
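The acoustic principle behind the bullets above can be illustrated with synthetic samples: a persistent leak raises the RMS energy of the hydrophone signal, so any window whose energy sits well above the quiet baseline is flagged. This is a sketch of the general signal-processing idea, not the University of Waterloo system itself.

```python
# Synthetic illustration: a leak-like burst lifts the root-mean-square
# (RMS) energy of the signal well above the quiet baseline. All sample
# values are invented.

import math

def rms(window):
    """Root-mean-square energy of a window of samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def leak_suspected(samples, window_size, baseline_rms, factor=3.0):
    """True if any non-overlapping window exceeds factor * baseline RMS."""
    for start in range(0, len(samples) - window_size + 1, window_size):
        if rms(samples[start:start + window_size]) > factor * baseline_rms:
            return True
    return False

quiet = [0.01, -0.02, 0.015, -0.01] * 4      # background hiss
noisy = quiet + [0.4, -0.5, 0.45, -0.42]     # leak-like burst at the end
```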

Air Quality Prediction and Automated Actions

Air pollution has a negative impact on millions of individuals around the world, and global solutions are the only way to address these global issues. Artificial intelligence is a practical approach to dealing with and reducing air pollution. AI can collect sensor and satellite data and assist researchers in blending climate models.

Let’s take a look at artificial intelligence-based solutions for cleaner air.

  • Artificial intelligence has the potential to improve data collection and qualitative measurement. AI can detect patterns in data sets to aid analysis.
  • AI can forecast future air quality and direct relevant agencies to take the necessary actions ahead of time.
  • Artificial intelligence can provide maintenance insights for sensors and other equipment.
  • AI and IoT provide recommended tools for real-time monitoring of air pollution. AI technology can swiftly and correctly identify sources of air pollution. Smart sensors, for example, can identify the source of a gas leak in a company and effectively apply corrective measures.
  • AI can aid in the reduction of air pollution in the automotive sector. AI technology allows autonomous vehicles to be fuel-efficient. AI-powered traffic signals can also help reduce air pollution: machine vision can adjust signals to traffic flow, reducing driving time.

AI technologies can greatly help government organizations and commercial firms by monitoring air purity levels and alerting personnel if air quality falls below a specific threshold.
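The threshold-alerting behaviour just described can be sketched in a few lines. The station names and the PM2.5 alert level are illustrative assumptions, not regulatory values or any agency's real configuration.

```python
# Sketch of threshold-based air-quality alerting: each monitoring station
# reports a PM2.5 reading, and any station above the alert level is
# listed for the relevant team. Names and threshold are illustrative.

ALERT_PM25 = 35.0  # micrograms per cubic metre, assumed alert level

def stations_in_alert(readings):
    """readings: station -> PM2.5. Return an alphabetical list of alerting stations."""
    return sorted(s for s, pm in readings.items() if pm > ALERT_PM25)

city = {"north": 18.2, "centre": 41.7, "harbour": 36.1, "park": 12.9}
```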

  • IBM researchers are collaborating with the Beijing government to use artificial intelligence to combat air pollution and monitor environmental health. Machine learning and cognitive abilities are being used by researchers to increase forecast accuracy. AI can help predict air pollution levels 10 days in advance. Scientists are combining artificial intelligence (AI) technologies to do scenario analysis and take necessary measures such as traffic control, plant shutdowns, and more.
  • Scientists at Loughborough University in the United Kingdom have created an AI-based algorithm that predicts air quality in advance. The model examines sensor data and assists policymakers in understanding the reasons and methods for reducing air pollution.
  • CleanAir.AI is a Canadian IoT firm that provides air filtration for homes and buildings using AI-based technology. The startup employs AI and IoT to provide actionable information on indoor and outdoor air quality, deliver cleaner air, and save energy.

Final Thoughts

A smart city has a wide range of components, and each one affects the quality of life of its urban dwellers. How we live, work, and play will change as smart cities grow and become more connected. From weather monitoring and pollution management to energy savings, water conservation, and waste management, Smart Cities may be a work in progress, but they are gradually becoming the epitome of urban living.

[To share your insights with us, please write to sghosh@martechseries.com]

Italian government wants to stop businesses using English – here’s why it’s the lingua franca of firms around the world


Amongst all languages, why has English become the lingua franca of firms around the world? An explanation follows.


Natalie Victoria Wilmot, University of Bradford

The Italian government has proposed new legislation to crack down on the use of foreign languages in government, business and public life. The draft bill is particularly aimed at the use of English, which it says “demeans and mortifies” the Italian language. The proposed legislation would require employment contracts and internal regulations of overseas businesses operating in Italy to be in Italian.

Obeying such a policy would be difficult for many firms. France introduced a similar law in 1994, which has long been seen as unenforceable. Despite being in legislation for nearly 30 years, almost all multinational companies operating in France are thought to be in breach of the law.

English is indisputably the dominant language of international business and trade. Globally, more than half of all multinational companies use English in their international operations. Companies as far apart as Japan’s Rakuten, France’s Sodexo, Finland’s Nordea and Mexico’s Cemex have designated English as a “common corporate language”. This is a language chosen by the organisation to be the main vehicle for internal communications.

It’s estimated that approximately 1.5 billion people globally speak English, so its dominance in international business is not going away.

How did it come to be this way? One clue can be found in Oxfam’s recently published inclusive language guide. The charity has attracted attention for describing English as “the language of a colonising nation”. The guide notes that “the dominance of English is one of the key issues that must be addressed in order to decolonise our ways of working”.

It is impossible to deny that the reason that English has its current status is because of historical expressions of power. The colonial expansion of the British empire between the late 16th and early 20th century led to English being spoken widely across the globe. This was often at the expense of local languages which are now endangered or wiped out as a result of the imposition of English.

The cultural and economic dominance of the US since the second world war has led to the further proliferation of English. This is particularly true among younger generations who learn English in order to consume popular culture. Additionally, global interest in business school education has meant that generations of managers have been taught the latest in business ideas and concepts. Often, these originate from the US – and are in English.

Companies who use English as their corporate language often portray it as a common sense and neutral solution to linguistic diversity in business. In reality, it is anything but.

The concept of Business English as a Lingua Franca (BELF) suggests the English used in organisations can be separated from native speakers and the grammatical rules that they impose on it. It emerged in the early 2000s, as management researchers began to investigate how organisations manage language diversity in their international operations. They discovered that although English was frequently used, it was not the same English that is spoken by native speakers.

Companies all over the world use English as their main language.
Pathdoc/Shutterstock

The former CEO of Volvo, a Swedish company, once remarked that the language of his company was “bad English”. BELF encourages us to think that there is no such thing. If communication takes place successfully, and the message that you wish to transmit is understood, then you have used BELF correctly, regardless of any idiosyncrasies in grammar or spelling.

My own research has shown that although BELF can be used effectively in international environments, when native speakers of English are involved in the communication, they claim authority over how the language should be used. This can exclude those whose use of English does not meet expectations.

Why English?

Clearly, organisations need to have some form of shared language to be able to effectively communicate to manage their operations. However, research suggests that there are particular benefits associated with using English, rather than something else, as a common corporate language.

For example, studies have shown that employees find it enriching to use English at work. Due to its grammatical structure, which doesn’t distinguish between formal and informal “you” as in many other languages, employees feel that using English can reduce hierarchies and create more egalitarian workplaces.

English undoubtedly has great practical utility – but rather than understanding it as something neutral, it is important to understand the mechanisms of power and subjugation through which English arrived at its current status. Without reflection, it can easily be used as a tool to exclude, and continues to reproduce colonial mindsets about status and hierarchies. Its ongoing use, however practical, continues that domination.

Natalie Victoria Wilmot, Associate Professor in International Business, University of Bradford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

General-purpose artificial intelligence


General-purpose artificial intelligence, written by Tambiama Madiega for the European Parliamentary Research Service, is an eye-opener on how machines are taking on problem-solving activities once performed by humans.



General-purpose artificial intelligence (AI) technologies, such as ChatGPT, are quickly transforming the way AI systems are built and deployed. While these technologies are expected to bring huge benefits in the coming years, spurring innovation in many sectors, their disruptive nature raises policy questions around privacy and intellectual property rights, liability and accountability, and concerns about their potential to spread disinformation and misinformation. EU lawmakers need to strike a delicate balance between fostering the deployment of these technologies while making sure adequate safeguards are in place.

Notion of general-purpose AI (foundation models)

While there is no globally agreed definition of artificial intelligence, scientists largely share the view that, technically speaking, there are two broad categories of AI technologies: ‘artificial narrow intelligence’ (ANI) and ‘artificial general intelligence’ (AGI). ANI technologies, such as image and speech recognition systems, also called weak AI, are trained on well-labelled datasets to perform specific tasks and operate within a predefined environment. By contrast, AGI technologies, also referred to as strong AI, are machines designed to perform a wide range of intelligent tasks, think abstractly and adapt to new situations. While only a few years ago AGI development seemed moderate, quick-paced technological breakthroughs, including the use of large language model (LLM) techniques, have since radically changed the potential of these technologies.

A new wave of AGI technologies with generative capabilities – referred to as ‘general-purpose AI’ or ‘foundation models’ – is being trained on broad sets of unlabelled data that can be used for different tasks with minimal fine-tuning. These underlying models are made accessible to downstream developers through application programming interfaces (APIs) and open-source access, and are used today as infrastructure by many companies to provide end users with downstream services.

Applications: ChatGPT and other general-purpose AI tools

In 2020, research laboratory OpenAI – which has since entered into a commercial partnership with Microsoft – released GPT-3, a language model trained on large internet datasets that is able to perform a wide range of natural language processing tasks (including language translation, summarisation and question answering). In 2021, OpenAI released DALL-E, a deep-learning model that can generate digital images from natural language descriptions. In December 2022, it launched its chatbot ChatGPT, based on GPT-3 and trained on machine learning models using internet data to generate any type of text. Launched in March 2023, GPT-4, the newest general-purpose AI tool, is expected to have even more applications in areas such as creative writing, art generation and computer coding.

General-purpose AI tools are now reaching the general public. In March 2023, Microsoft launched a new AI‑powered Bing search engine and Edge browser incorporating a chat function that brings more context to search results. It also released a GPT-4 platform allowing businesses to build their own applications (for instance for summarising long-form content and helping write software). Google and its subsidiary DeepMind are also developing general-purpose AI tools; examples include the conversational AI service, Bard. Google unveiled a range of generative AI tools in March 2023, giving businesses and governments the ability to generate text, images, code, videos, audio, and to build their own applications. Developers are using these ‘foundation models‘ to roll out and offer a flurry of new AI services to end users.

General-purpose AI tools have the potential to transform many areas, for example by creating new search engine architectures or personalised therapy bots, or assisting developers in their programming tasks. According to a Gartner study, investments in generative AI solutions are now worth over US$1.7 billion. The study predicts that in the coming years generative AI will have a strong impact on the health, manufacturing, automotive, aerospace and defence sectors, among others. Generative AI can be used in medical education and potentially in clinical decision-making or in the design of new drugs and materials. It could even become a key source of information in developing countries to address shortages of expertise.

Concerns and calls for regulation

The key characteristics identified in general-purpose AI models – their large size, opacity and potential to develop unexpected capabilities beyond those intended by their producers – raise a host of questions. Studies have documented that large language models (LLMs), such as ChatGPT, present ethical and social risks. They can discriminate unfairly and perpetuate stereotypes and social biases, use toxic language (for instance inciting hate or violence), present a risk for personal and sensitive information, provide false or misleading information, increase the efficacy of disinformation campaigns, and cause a range of human-computer interaction harms (such as leading users to overestimate the capabilities of AI and use it in unsafe ways). Despite engineers’ attempts to mitigate those risks, LLMs, such as GPT-4, still pose challenges to users’ safety and fundamental rights (for instance by producing convincing text that is subtly false, or showing increased adeptness at providing illicit advice), and can generate harmful and criminal content.

Since general-purpose AI models are trained by scraping, analysing and processing publicly available data from the internet, privacy experts stress that privacy issues arise around plagiarism, transparency, consent and lawful grounds for data processing. These models represent a challenge for education systems and for common-pool resources such as public repositories. Furthermore, the emergence of LLMs raises many questions, including as regards intellectual property rights infringement and distribution of copyrighted materials without permission. Some experts warn that AI-generated creativity could significantly disrupt the creative industries (in areas such as graphic design or music composition for instance). They are calling for incentives to bolster innovation and the commercialisation of AI-generated creativity on the one hand, and for measures to protect the value of human creativity on the other. The question of what liability regime should be used when general-purpose AI systems cause damage has also been raised. These models are also expected to have a significant impact on the labour market, including in terms of work tasks.

Against this backdrop, experts argue that there is a strong need to govern the diffusion of general-purpose AI tools, given their impact on society and the economy. They are also calling for oversight and monitoring of LLMs through evaluation and testing mechanisms, stressing the danger of allowing these tools to stay in the hands of just a few companies and governments, and highlighting the need to assess the complex dependencies between companies developing and companies deploying general-purpose AI tools. AI experts are also calling for a 6-month pause, at least, in the training of AI systems more powerful than GPT‑4.

General-purpose AI (foundation models) in the proposed EU AI act

EU lawmakers are currently engaged in protracted negotiations to define an EU regulatory framework for AI that would subject ‘high-risk’ AI systems to a set of requirements and obligations in the EU. The exact scope of a proposed artificial intelligence act (AI act) is a bone of contention. While the European Commission’s original proposal did not contain any specific provisions on general-purpose AI technologies, the Council has proposed that they should be considered. Scientists have meanwhile warned that any approach classifying AI systems as high-risk or not depending on their intended purpose would create a loophole for general purpose systems, since the future AI act would regulate the specific uses of an AI application but not its underlying foundation models.

In this context, a number of stakeholders, such as the Future of Life Institute, have called for general-purpose AI to be included in the scope of the AI act. Some academics favouring this approach have suggested modifying the proposal accordingly. Helberger and Diakopoulos propose to consider creating a separate risk category for general-purpose AI systems. These would be subject to legal obligations and requirements that fit their characteristics, and to a systemic risk monitoring system similar to the one under the Digital Services Act (DSA). Hacker, Engel and Mauer argue that the AI act should focus on specific high-risk applications of general-purpose AI and include obligations regarding transparency, risk management and non-discrimination; the DSA’s content moderation rules (for instance notice and action mechanisms, and trusted flaggers) should be expanded to cover such general-purpose AI. Küspert, Moës and Dunlop call for the general-purpose AI regulation to be made future-proof, inter alia, by addressing the complexity of the value chain, taking into account open-source strategies and adapting compliance and policy enforcement to different business models. For Engler and Renda, the act should discourage API access for general-purpose AI use in high-risk AI systems, introduce soft commitments for general-purpose AI system providers (such as a voluntary code of conduct) and clarify players’ responsibilities along value chains.


Adding Technologies to your Construction Site


The construction business has an enormous amount to gain from digitalization, so much so that it can be hard to know where to begin. So here are three ways to add technologies to your construction site, which Construction Pros explains quite elaborately.

3 Ways to Add Technologies to Your Construction Site


The big challenge for construction professionals looking to streamline operations isn’t whether technologies can help – they can, no question. Rather, it’s where to start.

 

For one thing, construction-business leaders are sorting through a mass of available point solutions that, while purporting to solve problems in the short term, may or may not help address the foremost underlying issue that has dogged this industry since the Tower of Jericho went up: that is, construction projects involve many hands, and, too often, one hand doesn’t know what the other is doing. It’s long been a generally bitter recipe for inefficiencies, rework, delays and ballooning costs.

For another, we now face the intimidating idea of connected construction, which, as elucidated by the likes of Deloitte, seems to evoke a wholesale, holistic digitalization involving command and control, quality control, asset tracking, performance management, safety intelligence, digital twin and BIM+, workforce efficiency, energy management, and more. It’s a lot to chew on.

Deloitte itself doesn’t assume that construction firms will digitalize in one fell swoop. Among the first steps their experts suggest include asking yourself, as a company, “What use cases or business opportunities are you most interested in solving or enabling?”

Those use cases and business opportunities will differ depending on one’s line of business, market, the competitive landscape and so on. But I’m seeing three key areas in which construction firms tend to be focusing as they take steps toward the connected-construction vision that will – or at least should – materialize across the industry in the near future.

1. Collaboration Tools

Construction projects work best when teams as they’re most broadly defined – owner/operators, architects, engineers, and construction teams – work together. Rare is the project in which these teams are truly siloed. But it’s also far too rare that their collaboration involves data sharing based on real-time information. Rather, so much of what counts as construction-business collaboration happens through emailed spreadsheets and status summaries that can be outdated before the files get opened.

Effective collaboration means using a cloud-based platform that enables real-time access to constantly updated information from all corners of a project, based on each player’s needs and security permissions. It also means establishing formal collaboration workflows among the players to delineate what the key data points for different roles are (the building owner will be interested in different views of a given pool of information than an electrical subcontractor) and how that data can best be shared. Cloud-based project-management systems and collaboration tools are the vehicles to get this done.
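As a toy illustration of role-based views over a shared project record, the filtering could look something like the sketch below; the roles, field names, and record are invented for illustration and are not taken from any particular platform.

```python
# Minimal sketch of role-based views over a shared project record.
# Roles and field names are hypothetical.

# Which fields each role is permitted to see.
ROLE_VIEWS = {
    "owner": {"budget_spent", "schedule_status", "milestones"},
    "electrical_sub": {"schedule_status", "electrical_tasks"},
}

def view_for(role, record):
    """Return only the fields the given role is permitted to see."""
    allowed = ROLE_VIEWS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "budget_spent": 1_250_000,
    "schedule_status": "on track",
    "milestones": ["foundation", "framing"],
    "electrical_tasks": ["rough-in: level 2"],
}

print(view_for("owner", record))
print(view_for("electrical_sub", record))
```

The point is simply that everyone reads from one constantly updated record, while each role sees the slice relevant to it.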

2. Mobile Data Capture

Cloud-based data repositories may be far better than dispersed databases and spreadsheets, but the benefits of centrally stored, easily accessible data depend on the quality of that data. In construction projects, data quality – and, by extension, management’s ability to rely on that data and make decisions based on it – depends on inputs from teams on the ground. Those inputs come from mobile devices into which crews provide updates, either directly or through task-related workflows embedded in those devices. Internet-of-Things (IoT) sensors are also increasingly in play, automatically feeding data to cloud-based project-management systems and helping enable predictive maintenance. Either way, mobile data capture can vastly improve the volume and accuracy of a project’s data and, by extension, give players up and down the chain the visibility to make better decisions during a mass deployment or a one-off build.
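The idea of checking crew or sensor inputs before they reach the central store can be sketched as follows; the payload schema and field names are assumptions for illustration, not a real platform’s API.

```python
from datetime import datetime, timezone

# Hypothetical schema for a field update arriving from a crew's mobile
# device or an IoT sensor. The required fields are illustrative.
REQUIRED = {"project_id", "task_id", "status", "timestamp"}

def validate_update(payload):
    """Reject updates missing required fields so incomplete data never
    reaches the central, cloud-hosted project record."""
    missing = REQUIRED - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload

update = {
    "project_id": "bridge-042",
    "task_id": "pour-deck-3",
    "status": "complete",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
validate_update(update)  # passes; a malformed payload would raise
```

Even a simple gate like this at the point of capture keeps low-quality inputs from quietly degrading the data everyone downstream depends on.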

3. Predictive Analytics

Predictive analytics solutions use statistical models – and, increasingly, machine learning and artificial intelligence – to predict the future based on data from the past and present. In the construction context, predictive analytics is proving particularly valuable in identifying risks and assisting with forecasting. But there’s a growing universe of construction-business use cases, as McKinsey & Co. points out: from sharpening proposal bids to recognizing when a project may run into trouble. Here, too, centralized, cloud-based data sources and mobile data capture are essential precursors to predictive analytics in construction as they feed the large, up-to-date pools of data upon which predictive analytics depend.
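As a toy sketch of the statistical side, a delay-risk score might look like the logistic model below. The feature names and weights are entirely invented; a real system would fit such weights, via machine learning, to historical project data of the kind the centralized repositories described above provide.

```python
import math

# Toy logistic model scoring the risk that a project runs late.
# Features and weights are made up for illustration.
WEIGHTS = {"pct_rework": 4.0, "rfi_backlog": 0.05, "weather_delay_days": 0.1}
BIAS = -2.0

def delay_risk(features):
    """Return a probability-like score in (0, 1) that the project is late."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

healthy = delay_risk({"pct_rework": 0.02, "rfi_backlog": 3, "weather_delay_days": 1})
troubled = delay_risk({"pct_rework": 0.30, "rfi_backlog": 40, "weather_delay_days": 12})
print(round(healthy, 2), round(troubled, 2))
```

A score like this could flag, early, the kind of project McKinsey describes as likely to run into trouble, before the trouble shows up in the schedule.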

 

The construction business has an enormous amount to gain from digitalization – so much so that it can be hard to know where to begin. Starting with cloud-based systems that enable real-time collaboration, mobile data capture, and predictive analytics establishes a foundation for expansion into the broader vision of connected construction. Along the way, you’ll get a lot more done and save yourself some money – not to mention quite a few headaches.


 

Sustainable Cities and their Digital Twins


There is a growing belief that the key to sustainable cities may lie in increasingly sophisticated digital twins. Let’s see what Anthropocene has published.

 


The key to sustainable cities may lie in increasingly sophisticated digital twins

Researchers offer the first rigorous analysis of “in silico” equivalents of urban areas as a powerful tool for sustainable development
March 14, 2023

Dynamic computer models of cities known as ‘digital twins’ could help drive sustainable development across the world’s urban areas, an international team of authors argues in the journal Nature Sustainability.

Digital twins are more than just static models. They incorporate near-real-time data from sensors and other sources to produce “virtual replicas,” the authors explain—“in silico equivalents of real-world objects.”
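A minimal sketch of that idea, assuming a made-up district with street-level noise sensors: a virtual replica object whose state is refreshed as near-real-time readings arrive, and which can answer aggregate questions about the district.

```python
# Minimal sketch of a digital twin: a virtual replica whose state is
# refreshed from (simulated) sensor readings. Names are illustrative.

class DistrictTwin:
    def __init__(self):
        self.state = {}  # latest reading per sensor

    def ingest(self, sensor_id, value):
        """Update the replica with a near-real-time sensor reading."""
        self.state[sensor_id] = value

    def average(self, prefix):
        """Aggregate a metric across all sensors of one type."""
        vals = [v for k, v in self.state.items() if k.startswith(prefix)]
        return sum(vals) / len(vals) if vals else None

twin = DistrictTwin()
twin.ingest("noise-01", 62.0)  # dB readings from street sensors
twin.ingest("noise-02", 58.0)
print(twin.average("noise"))   # district-wide view derived from live data
```

Real urban twins layer in vastly more data, but the core loop is the same: readings flow in continuously, and planners query the replica rather than the city itself.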

The concept of digital twins first arose in manufacturing, and they are primarily used in product and process engineering. But the models have also been employed in fields ranging from personalized medicine to climate forecasting, at scales from the molecular to the planetary.

Many researchers have posited that digital twins will be a powerful tool for sustainability efforts. But nobody has taken a rigorous look at the benefits and pitfalls of urban digital twins. The new study takes on that task, paying particular attention to the potential for the modeling approach to help achieve the UN Sustainable Development Goals.

Digital twins have a variety of potential benefits in this realm, the researchers say. They can help cities allocate resources more efficiently—design more effective water grids, predict traffic congestion to guide transportation planning, simulate consumer behavior to recommend energy-saving measures, and so on.

In addition, “In silico models provide a virtual space where new clean technologies, which promise resource efficiency but may cause unintended harm, can be tested at a speed and scale that may otherwise be inhibited by the precautionary principle,” the researchers write. For example, they could help cities figure out how to incorporate renewable sources of energy into the grid without compromising reliability.
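A toy version of such an in-silico experiment might look like the Monte Carlo check below, which asks whether a grid still meets demand after adding variable solar capacity. All capacities and demand figures are invented for illustration.

```python
import random

# Toy "in silico" grid experiment: before adding solar capacity to a
# district, simulate how often supply still meets demand. All numbers
# are made up for illustration.

def reliability(solar_capacity_mw, trials=10_000, seed=42):
    """Fraction of simulated hours in which supply covers demand."""
    rng = random.Random(seed)
    met = 0
    for _ in range(trials):
        demand = rng.uniform(80, 120)             # MW, fluctuating demand
        solar = solar_capacity_mw * rng.random()  # output varies with weather
        baseline = 100                            # MW of conventional supply
        if baseline + solar >= demand:
            met += 1
    return met / trials

print(reliability(0))   # baseline-only grid
print(reliability(30))  # with 30 MW of variable solar added
```

The virtue of the virtual experiment is exactly what the authors describe: the scenario can be run thousands of times at no risk to the real grid.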

Digital twins could also help scientists and policymakers to collaborate across disciplines, agencies, levels of government, and geographic distances. And they could aid cities in monitoring and reporting progress on the Sustainable Development Goals or other sustainability aims.

Some of the authors of the paper have been involved in the development of a digital twin for Fishermans Bend, an urban renewal project in Melbourne, Australia. The model includes more than 1,400 layers of both historical and real-time data from public and private sources. More than 20 government agencies and municipalities are using the model to analyze how proposed buildings will affect sunlight falling on open space and vegetation, forecast tram traffic patterns, and address other planning questions.

Digital twin models are also being used in cities including Zurich, Singapore, and Shanghai to monitor noise and pollution and facilitate urban planning that takes into account population growth and climate change.

But there are pitfalls to the digital twin approach, too. Because they require so much data, advanced computing power, and technological know-how, digital twins have the potential to exacerbate digital divides, especially between high-income and lower-income countries.

What’s more, even the most complex model may fall short in representing the multifarious nature of a real-life city. The data necessary to underpin a successful digital twin may be unavailable, inaccessible, or incompatible with other sources. And the social-science aspects of digital twins are especially poorly understood.

Finally, models can be optimized for the wrong targets. There are inherent contradictions between different Sustainable Development Goals, and programmers have to take care about how outcomes and parameters are prioritized, the researchers say. For whom and by whom are these decisions made—and who’s left out of the process?

To avoid these pitfalls of digital twins, and to reap the potential benefits, the researchers recommend that governments and international institutions get involved in bridging digital divides; leaving digital twin technology to the marketplace virtually guarantees that low-resource countries will be left behind.

They also call on those creating and implementing digital twins of cities to pay attention to social and ethical responsibility. “A central question that derives from these issues is: to what extent are those who may be affected by the decisions based on simulation models included in their design and deployment?” they write.

“Interestingly in such instances, digital twins themselves can raise awareness among planners and policymakers of socioeconomic inequalities, thereby becoming instruments of inclusion,” the researchers add.

Source: Tzachor A. et al. “Potential and limitations of digital twins to achieve the Sustainable Development Goals.” Nature Sustainability 2022.

