Universities must take “heroic action” to address the sustainability crisis after helping to lay its foundations by failing to take action sooner, Arizona State University president Michael Crow has claimed.
He said the sustainability emergency – the focus of the GSDC's discussions on urgent solutions – was caused by the relationship between the built environment and the natural systems on which we depend.
Professor Crow told delegates at the event in Saudi Arabia that sustainability was “critical to our success as a species”.
“We in academia have contributed mightily to the designed environment, and hold much of the responsibility for the lack of sustainability of that built environment and its increasing disruption of the natural environment,” he said.
Professor Crow warned that the world was entering an “unbelievably challenging moment where everything is accelerating”, and that there were many things higher education could have done already but had not.
The sector’s inability to be “more conscious of what we’re doing and how we’re doing it” helped lay the foundations for the sustainability crisis today, he said.
“A lot of groups have been responsible for our lack of sustainability, but at the heart of all of them has been the academy, [and] the universities,” he added.
“It’s time for universities to really step up for heroic action in the way that universities did around some other issues in the past.
“It’s time for new types of knowledge to be produced, new ways of thinking.”
He called for universities to broaden the way they organised themselves because working in isolation would “not get us there quickly enough”.
The summit, held in the Middle East for the first time, is aiming to challenge the usual thinking on what higher education, with the support of governments, businesses and society, must do to help meet the United Nations’ Sustainable Development Goals (SDGs).
“It’s time for us to mount up, to begin working together, to begin aligning together, to begin working across institutions and across the world to take on this notion of global sustainable development,” added Professor Crow.
Also speaking at the summit, Tony Chan, president of KAUST, said the world was in a state of crisis that imperilled all of humanity, and that universities across the globe should act with resolve.
“Our required response to the present crisis must be of a scale and sense of urgency akin to how we must respond to major world wars,” he said.
“Our universities must cease to be exemplars of unsustainable practices and we must become the transformative enablers of sustainability for others.”
Those outside higher education took their cue not just from what universities preached, but from what they practised, said Dr Chan.
“If academics are serious about tackling sustainability challenges, we can’t wait for the cavalry to show up,” he said. “We are the cavalry.”
In a UNESCO article on transforming education, the author asks how technology and youth can drive change, noting that technology can enhance the learning experience, address educational challenges, and prepare learners for future jobs.
Transforming education: How can technology and youth drive change?
As the world reaches a critical point between the Transforming Education Summit and the SDG Summit scheduled for September 2023, there is an urgent need for action to break down the barriers that keep 244 million young people out of school. This blog announces a new partnership between Restless Development and the GEM Report. Together we aim to mobilize youth globally to inform the development of the 2023 Youth Report on technology and education, exploring how technology can address various education challenges, including issues of access, equity and inclusion, quality and system management.
Education online – a case in point
After the disruption caused by the COVID-19 pandemic, the education sector is still in recovery. The pandemic had a profound impact on youth, with the most vulnerable learners hit the hardest. The global shift to distance and online learning left many less privileged communities without a means of connection to education, and some of the gains made towards the goals of the Education 2030 agenda were lost. As a result, the 2023 GEM Report on technology and education, due out on 26 July in Montevideo, comes at a critical moment to reflect on how to accelerate progress towards SDG 4.
Technology can enhance the learning experience, address educational challenges, and prepare learners for the jobs of the future. STEM education, in particular, is essential for promoting innovation and economic growth and equipping learners with the skills they need to succeed in the current technology-driven world. But it also raises concerns over privacy, data protection and sustainability.
The 2023 GEM Report will investigate the ongoing debates around technology and education. It will explore how technology addresses issues of access, equity and inclusion, quality and system management. It will also acknowledge that some of the proposed solutions may have negative consequences.
In this fast-changing world, technology is crucial in providing learners with access to a wide range of resources and information. With technology, learners can access educational materials from anywhere at any time, collaborate with peers, and engage in interactive learning activities that promote critical thinking and problem-solving skills.
However, the COVID-19 pandemic has exposed deep inequalities in access to technology. In many parts of the world, young learners are not being prepared for their future because of a lack of digital access in formal teaching and outdated curricula that do not accommodate technology. To create a more inclusive, creative and future-ready approach to learning, education systems must be transformed; this requires scaling up access to digital skills and decent infrastructure to ensure that no one is left behind.
A new partnership with Restless Development to mobilize youth globally will inform the 2023 Youth Report
We are pleased to announce a new partnership between Restless Development and the GEM Report to mobilize youth globally to reflect upon, question and debate the recommendations of the 2023 Global Education Monitoring (GEM) Report and inform the development of its youth edition. Building on the findings of youth consultations on technology held in the run-up to the 2021 RewirED Forum, Restless Development will lead a series of youth-led regional consultations aiming to better understand the challenges and opportunities young people from around the world face when using technology in education, and to hear their recommendations for policymakers.
The global consultation process will be officially launched on 26 April 2023 during a side-event at the ECOSOC Youth Forum in New York, where youth activists and representatives will gather to discuss the themes that should be covered in the Youth Report. This is the first time that youth have been involved at such an early stage of the report’s development. Their views on the framing of recommendations for their region will be set out in the youth version of the 2023 GEM Report, alongside views from other regions and in relation to the recommendations contained in the global GEM Report.
This first global consultation event will trigger a series of activities:
A global survey on the key issues that the Youth Report should address: Youth and student organizations will be able to choose from a series of themes linked to the recommendations of the global report: equity and inclusion, appropriateness, sustainability, and privacy among others.
A call for expressions of interest for youth organizations from around the world to organize regional and thematic consultations to inform the development of the Youth Report and take part in associated advocacy activities.
An online consultation to collect thoughts from youth from around the world on the themes that the report should cover and recommend projects and good practices on education technology to inform the report.
We invite you to consult this page to see all the ways in which you can be involved!
In local news from The Jordan Times, a lecture on the archaeology of architecture receives comprehensive coverage.
Lecture delves into archaeology of architecture
By Saeb Rawashdeh
The main entrance of Hallabat Mosque (Photo courtesy of Ignacio Arce)
AMMAN — A relatively new approach in the discipline, the archaeology of architecture was at the heart of a recent lecture held as part of the Department of Antiquities’ 100th anniversary festivities at the department’s headquarters in Amman.
Delivering the lecture, titled “Archaeology of Architecture and the Analysis of the Historical Buildings”, Professor Ignacio Arce from the German-Jordanian University said that the new approach manifests itself in not only excavation, but interpretation, restoration and conservation of archaeological sites and buildings.
Arce, who is also the head of the Spanish Archaeological Mission in Jordan, has been excavating, preserving and presenting finds to visitors of the Umayyad Palace and Medina at the Amman Citadel, Qasr Al Hallabat, Hallabat Mosque, Hammam as Sarrah, Qastal, Deir Al Kahf and Qusayr Amra over a span of a few decades.
“One of the problems that archaeologists face is the lack of written historical sources, so the only reliable source is a monument itself,” Arce said, noting that the role of a scholar is to “interrogate” and find the most reliable source for their claims.
“Inscriptions are not the most valuable proof for archaeologists because when writing people tend to lie,” he stressed, adding that “sometimes it’s better to trust the work, not the words”.
Furthermore, archaeological analysis of inscriptions can confirm whether a text is authentic or was added later, he said.
Arce said that his goal was also to convey the knowledge produced to the local communities by creating visitor centres and site museums as well as training new generations of stone cutters, masons, architects and archaeologists in the field.
“In archaeological stratification, we have a combined series of natural and anthropic deposits,” Arce said, noting that the term archaeology of architecture was first used by Tiziano Mannoni in 1990 to describe methods of gaining historical knowledge from building structures, which can eventually be used in architectural heritage conservation.
Moreover, when the archaeology of architecture methodology is integrated with research on written, iconographic and oral sources, it becomes possible to reconstruct the construction history of a building and the construction techniques used in its production, Arce outlined.
“Therefore, stratigraphy provides a relative dating while chrono-typology provides an absolute dating,” the professor said, adding that he implemented some of these techniques at the Amman Citadel, where he worked in 1995.
“Architectural language is like a written language,” Arce said.
The Italian government has proposed new legislation to crack down on the use of foreign languages in government, business and public life. The draft bill is particularly aimed at the use of English, which it says “demeans and mortifies” the Italian language. The proposed legislation would require employment contracts and internal regulations of overseas businesses operating in Italy to be in Italian.
Complying with such a policy would be difficult for many firms. France introduced a similar law in 1994, which has long been seen as unenforceable. Despite the law having been on the books for nearly 30 years, almost all multinational companies operating in France are thought to be in breach of it.
How did it come to be this way? One clue can be found in Oxfam’s recently published inclusive language guide. The charity has attracted attention for describing English as “the language of a colonising nation”. The guide notes that “the dominance of English is one of the key issues that must be addressed in order to decolonise our ways of working”.
It is impossible to deny that English owes its current status to historical expressions of power. The colonial expansion of the British empire between the late 16th and early 20th centuries led to English being spoken widely across the globe, often at the expense of local languages, which are now endangered or extinct as a result of the imposition of English.
The cultural and economic dominance of the US since the second world war has led to the further proliferation of English. This is particularly true among younger generations who learn English in order to consume popular culture. Additionally, global interest in business school education has meant that generations of managers have been taught the latest in business ideas and concepts. Often, these originate from the US – and are in English.
Companies that use English as their corporate language often portray it as a common-sense, neutral solution to linguistic diversity in business. In reality, it is anything but.
The concept of Business English as a Lingua Franca (BELF) suggests the English used in organisations can be separated from native speakers and the grammatical rules that they impose on it. It emerged in the early 2000s, as management researchers began to investigate how organisations manage language diversity in their international operations. They discovered that although English was frequently used, it was not the same English that is spoken by native speakers.
The former CEO of Volvo, a Swedish company, once remarked that the language of his company was “bad English”. BELF encourages us to think that there is no such thing. If communication takes place successfully, and the message that you wish to transmit is understood, then you have used BELF correctly, regardless of any idiosyncrasies in grammar or spelling.
My own research has shown that although BELF can be used effectively in international environments, when native speakers of English are involved in the communication, they claim authority over how the language should be used. This can exclude those whose use of English does not meet expectations.
Clearly, organisations need to have some form of shared language to be able to effectively communicate to manage their operations. However, research suggests that there are particular benefits associated with using English, rather than something else, as a common corporate language.
English undoubtedly has great practical utility – but rather than understanding it as something neutral, it is important to understand the mechanisms of power and subjugation through which English arrived at its current status. Without reflection, it can easily be used as a tool to exclude, and continues to reproduce colonial mindsets about status and hierarchies. Its ongoing use, however practical, continues that domination.
General-purpose artificial intelligence (AI) technologies, such as ChatGPT, are quickly transforming the way AI systems are built and deployed. While these technologies are expected to bring huge benefits in the coming years, spurring innovation in many sectors, their disruptive nature raises policy questions around privacy and intellectual property rights, liability and accountability, and concerns about their potential to spread disinformation and misinformation. EU lawmakers need to strike a delicate balance between fostering the deployment of these technologies while making sure adequate safeguards are in place.
Notion of general-purpose AI (foundation models)
While there is no globally agreed definition of artificial intelligence, scientists largely share the view that technically speaking there are two broad categories of AI technologies: ‘artificial narrow intelligence’ (ANI) and ‘artificial general intelligence’ (AGI). ANI technologies, such as image and speech recognition systems, also called weak AI, are trained on well-labelled datasets to perform specific tasks and operate within a predefined environment. By contrast, AGI technologies, also referred to as strong AI, are machines designed to perform a wide range of intelligent tasks, think abstractly and adapt to new situations. While only a few years ago progress in AGI development seemed modest, quick-paced technological breakthroughs, including the use of large language model (LLM) techniques, have since radically changed the potential of these technologies. A new wave of AGI technologies with generative capabilities – referred to as ‘general-purpose AI’ or ‘foundation models’ – are being trained on a broad set of unlabelled data and can be used for different tasks with minimal fine-tuning. These underlying models are made accessible to downstream developers through application programming interfaces (APIs) and open-source access, and are used today as infrastructure by many companies to provide end users with downstream services.
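The downstream pattern the paragraph describes (one underlying model exposed through an API and reused for many tasks with minimal adaptation) can be sketched as follows. This is a minimal illustration only: the wrapper class, prompt wording and stubbed transport are assumptions for the sketch, not any real provider’s interface.

```python
# Sketch of the "downstream developer" pattern: a thin service layer built
# on top of a general-purpose model exposed via an API. The endpoint shape,
# prompts and transport below are illustrative, not a real provider's API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class FoundationModelClient:
    """Wraps one hosted foundation model behind a single call site."""
    send: Callable[[str], str]  # transport: prompt in, completion out

    def summarise(self, text: str) -> str:
        # Task-specific behaviour comes from prompting alone: the same
        # underlying model serves many downstream tasks.
        return self.send(f"Summarise in one sentence: {text}")

    def translate(self, text: str, language: str) -> str:
        return self.send(f"Translate into {language}: {text}")


# A stubbed transport stands in for the real HTTP call so the sketch runs.
def fake_transport(prompt: str) -> str:
    return f"[model output for: {prompt[:30]}...]"


client = FoundationModelClient(send=fake_transport)
print(client.summarise("General-purpose AI models are trained on broad data."))
```

In a real deployment the `send` callable would wrap an authenticated HTTP request to the provider’s endpoint; the point of the sketch is that many services can share one model purely by varying the prompt.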
Applications: ChatGPT and other general-purpose AI tools
In 2020, research laboratory OpenAI – which has since entered into a commercial partnership with Microsoft – released GPT-3, a language model trained on large internet datasets that is able to perform a wide range of natural language processing tasks (including language translation, summarisation and question answering). In 2021, OpenAI released DALL-E, a deep-learning model that can generate digital images from natural language descriptions. In December 2022, it launched its chatbot ChatGPT, based on GPT-3 and using machine learning models trained on internet data to generate any type of text. Launched in March 2023, GPT-4, the newest general-purpose AI tool, is expected to have even more applications in areas such as creative writing, art generation and computer coding.
General-purpose AI tools are now reaching the general public. In March 2023, Microsoft launched a new AI‑powered Bing search engine and Edge browser incorporating a chat function that brings more context to search results. It also released a GPT-4 platform allowing businesses to build their own applications (for instance for summarising long-form content and helping write software). Google and its subsidiary DeepMind are also developing general-purpose AI tools; examples include the conversational AI service, Bard. Google unveiled a range of generative AI tools in March 2023, giving businesses and governments the ability to generate text, images, code, videos, audio, and to build their own applications. Developers are using these ‘foundation models’ to roll out and offer a flurry of new AI services to end users.
General-purpose AI tools have the potential to transform many areas, for example by creating new search engine architectures or personalised therapy bots, or assisting developers in their programming tasks. According to a Gartner study, investments in generative AI solutions are now worth over US$1.7 billion. The study predicts that in the coming years generative AI will have a strong impact on the health, manufacturing, automotive, aerospace and defence sectors, among others. Generative AI can be used in medical education and potentially in clinical decision-making or in the design of new drugs and materials. It could even become a key source of information in developing countries to address shortages of expertise.
Concerns and calls for regulation
The key characteristics identified in general-purpose AI models – their large size, opacity and potential to develop unexpected capabilities beyond those intended by their producers – raise a host of questions. Studies have documented that large language models (LLMs), such as ChatGPT, present ethical and social risks. They can discriminate unfairly and perpetuate stereotypes and social biases, use toxic language (for instance inciting hate or violence), present a risk for personal and sensitive information, provide false or misleading information, increase the efficacy of disinformation campaigns, and cause a range of human-computer interaction harms (such as leading users to overestimate the capabilities of AI and use it in unsafe ways). Despite engineers’ attempts to mitigate those risks, LLMs, such as GPT-4, still pose challenges to users’ safety and fundamental rights (for instance by producing convincing text that is subtly false, or showing increased adeptness at providing illicit advice), and can generate harmful and criminal content.
Since general-purpose AI models are trained by scraping, analysing and processing publicly available data from the internet, privacy experts stress that privacy issues arise around plagiarism, transparency, consent and lawful grounds for data processing. These models represent a challenge for education systems and for common-pool resources such as public repositories. Furthermore, the emergence of LLMs raises many questions, including as regards intellectual property rights infringement and distribution of copyrighted materials without permission. Some experts warn that AI-generated creativity could significantly disrupt the creative industries (in areas such as graphic design or music composition for instance). They are calling for incentives to bolster innovation and the commercialisation of AI-generated creativity on the one hand, and for measures to protect the value of human creativity on the other. The question of what liability regime should be used when general-purpose AI systems cause damage has also been raised. These models are also expected to have a significant impact on the labour market, including in terms of work tasks.
Against this backdrop, experts argue that there is a strong need to govern the diffusion of general-purpose AI tools, given their impact on society and the economy. They are also calling for oversight and monitoring of LLMs through evaluation and testing mechanisms, stressing the danger of allowing these tools to stay in the hands of just a few companies and governments, and highlighting the need to assess the complex dependencies between companies developing and companies deploying general-purpose AI tools. AI experts are also calling for a 6-month pause, at least, in the training of AI systems more powerful than GPT‑4.
General-purpose AI (foundation models) in the proposed EU AI act
EU lawmakers are currently engaged in protracted negotiations to define an EU regulatory framework for AI that would subject ‘high-risk’ AI systems to a set of requirements and obligations in the EU. The exact scope of a proposed artificial intelligence act (AI act) is a bone of contention. While the European Commission’s original proposal did not contain any specific provisions on general-purpose AI technologies, the Council has proposed that they should be considered. Scientists have meanwhile warned that any approach classifying AI systems as high-risk or not depending on their intended purpose would create a loophole for general purpose systems, since the future AI act would regulate the specific uses of an AI application but not its underlying foundation models.
In this context, a number of stakeholders, such as the Future of Life Institute, have called for general-purpose AI to be included in the scope of the AI act. Some academics favouring this approach have suggested modifying the proposal accordingly. Helberger and Diakopoulos propose creating a separate risk category for general-purpose AI systems. These would be subject to legal obligations and requirements that fit their characteristics, and to a systemic risk monitoring system similar to the one under the Digital Services Act (DSA). Hacker, Engel and Mauer argue that the AI act should focus on specific high-risk applications of general-purpose AI and include obligations regarding transparency, risk management and non-discrimination; the DSA’s content moderation rules (for instance notice and action mechanisms, and trusted flaggers) should be expanded to cover such general-purpose AI. Küspert, Moës and Dunlop call for the general-purpose AI regulation to be made future-proof, inter alia, by addressing the complexity of the value chain, taking into account open-source strategies and adapting compliance and policy enforcement to different business models. For Engler and Renda, the act should discourage API access for general-purpose AI use in high-risk AI systems, introduce soft commitments for general-purpose AI system providers (such as a voluntary code of conduct) and clarify players’ responsibilities along value chains.
Originally posted on Human Wrongs Watch (UN News) – Disinformation, hate speech and deadly attacks against journalists are threatening freedom of the press worldwide, UN Secretary-General António Guterres said on Tuesday 2 May 2023, calling for greater solidarity with the people who bring us the news.