Entrepreneur Middle East's Education Tech section published this story on May 14, 2019, introducing Hadi Partovi as follows: "Having built (and funded) great startups, this entrepreneur and investor opens up on his mission to teach kids how to code."
Sitting in a corner of The Third Line Gallery in Dubai's arts district of Alserkal Avenue, Hadi Partovi, a tech entrepreneur and angel investor known for his early bets on Facebook, Dropbox, Airbnb, and Uber, is quietly tapping away on his laptop prior to an invite-only fireside chat organized by VentureSouq, a Dubai-based early-stage equity funding platform.
He is here, wearing his signature baseball cap, to present Code.org, a Seattle-based education non-profit dedicated to expanding access to computer science in schools around the world, of which he is the founder and CEO. The main reason for founding this global social-impact initiative is his belief that mastering computer science is nothing less than a life-changing skill.
Sonia Weymuller, Founding Partner of VentureSouq, introducing Hadi Partovi at a VSQ Talks event at The Third Line Gallery in Dubai.
Yet, before we expand on that, I decide to focus on his approach to investing in early-stage tech startups, knowing that I will hear something different from a phrase that gets thrown around by every startup investor out there: "I invest in people, not ideas." Partovi also has a people-first investment philosophy; however, not only can he point out specifically what "investing in people" actually means for him, he can even measure it.
The Partovi twins, Hadi and his brother Ali, currently the founder and CEO of Neo, a community of young engineers and some of the world's top programmers, jointly invested in startup founders for 17 years (since 2018, they have focused on individual investments), but only in those who passed their coding test. It started with the founders of Dropbox, Partovi explains. "The best tech companies don't hire a single technical person without putting them through a lot of tests, so why would an investor consider giving hundreds of thousands of dollars without even one test to show that they can do something?" he says. "Most VCs don't do this because they themselves don't know the technology, so they just think about whether they like the idea or not, and they take it for granted that a person can do it. If you look at the companies that have succeeded, the idea often isn't unique; it's the execution." He points out that Google was not the first search engine company, Facebook was not the first social networking platform, and Microsoft was not the first company building an operating system: what set all three apart was having the strongest engineers on board.
The Partovi brothers know this from their own entrepreneurial experience. Partovi may come across as humble, quiet, and almost reticent, but he was part of the team that founded Tellme Networks, a voice recognition software developer, and sold it to Microsoft for US$800 million in 2007. Nearly a decade earlier, in 1998, Ali Partovi was a co-founder of LinkExchange, an internet advertising company that was also acquired by Microsoft, for $265 million. The brothers' website has a page listing their 34 ongoing investments, which include Airbnb, ClassPass, and Uber, and 23 successful exits: Dropbox (IPO), Facebook (IPO), and Zappos (acquired by Amazon), to name just a few. If you scroll down this page, you will also find a list of 10 of their unsuccessful investments, and Partovi is open about the fact that there were a few bruises before the brothers developed their investment muscle. "I did invest in a bad idea when I liked the person, but if I look at all my investments, the worst ones were the cases where I liked the idea but I didn't like the entrepreneur, and there are also cases where I chose not to invest even though I liked the entrepreneur," he says. "And I've made other mistakes too, such as when one of my college classmates wrote to me in 1998, saying that he had just joined a group of friends from his graduate program to start a company, and he was like, 'They are the smartest people I know.' I remember thinking that nobody needs another search engine, that I wouldn't invest in this company, that he was just the first employee, and that it was going to be a complete failure. It turned out that the company was Google, and he was their first employee and the Chief Technology Officer. He was also at the top of my class in computer science at Harvard. So, if I could go back and invest in all the best computer scientists I graduated with, I would have made a lot more money. I have done well, but I wouldn't have missed opportunities like this one."
His stress on the importance of engineering talent reflects how the Partovi brothers came to be where they are today. Born in Tehran, Iran, the twins taught themselves to code on a Commodore 64, which fueled their passion for programming. The family fled to the US in 1984, following the 1979 Iranian Revolution. After earning a master's degree in computer science from Harvard University, Hadi Partovi rose up the executive ranks at Microsoft before launching his own startups. And now, he believes that every young person around the world deserves to be propelled forward in life by learning this specific skill. "This is a story about opportunity, and how we can expand who has access to that opportunity, what the jobs of the future will look like, and how we can ensure that everyone gets an opportunity," Partovi says, on why he advocates computer science training, and why Code.org provides coding curricula for schools around the world. "In a world of accelerating technological change, the most important thing everybody can learn is how to adapt to new technology. Many schools teach technology, but they teach kids how to use it, whereas we want to teach them how to create technology. And learning to create technology is important, not only because it leads to an opportunity, and not only because of the future of the job market, but because for kids, it's fun and it teaches them creativity. Creativity is such a natural human desire, something that drives adults, and especially youth, but it doesn't really exist in the school system."
Since launching in 2013, Code.org has created the most broadly used curriculum platform for K-12 computer science in the United States. Its computer science classes have reached 30% of American students, while its Hour of Code initiative, a global campaign offering a one-hour introduction to computer science, has reached 10% of students around the world. Furthermore, the Code.org team notes that the nonprofit has more than 100 international partners and supports 63 languages in 180+ countries, with students having created 35 million projects on the platform. Importantly, they also state that 48% of Code.org students are underrepresented minorities. In addition to all of this, Partovi is a firm believer that among the future coding-skilled founders tackling the world's biggest problems, we will see many more women than today. According to a teacher survey by Code.org, 46% of users on the non-profit's Code Studio platform are female. "There is a misconception that this is for boys, not for girls, which is totally not true," Partovi says. "When girls reach 13 or 14, if they haven't tried computer science yet, there are too many other things to do and a pressure to be cool, and this is not cool for them, because of that social stereotype that this is for boys. So, as a girl, if at 13 you haven't tried it yet, you have to go against that social stereotype. For a boy, the social stereotype is that this is for you, so that's fine. It's hard to go against a social stereotype for anybody, but it is especially hard for a 13-year-old, when you've just started learning how to be secure in yourself." To illustrate, Partovi mentions that Google search results for "software engineers" will mainly show images of men, whereas the results for "students coding" will show men and women in almost equal numbers.
When it comes to other misconceptions about learning computer science, Partovi mentions the false notions people hold about its scope and complexity. "I've probably made this worse, because of the name of our non-profit, but computer science is more than coding," he says. "Code.org is about a whole bunch of fields that are all technical, they are all part of computer science, and I believe that all of them belong in primary and secondary education. Just as science has biology and chemistry and physics, you don't teach just one of them." Partovi adds, "The other misconception is that this is just for rocket scientists. People imagine that computer science is as hard as calculus, but they don't realize that six-year-olds can start learning it. If you think about math, first-grade math is easy, but 12th-grade math could be more difficult, and university math is extra hard. Computer science is the same; the first-grade-level material is very easy."
For all these reasons, Partovi, despite coming across as a quiet man, is ready to make some noise with the recent announcement of the single largest expansion of Code.org's computer science curriculum. Code.org's Computer Science (CS) Fundamentals course, geared toward primary school, will be translated into the 10 most widely spoken languages in the non-profit's database: Chinese (traditional and simplified), French, Italian, Japanese, Korean, Polish, Portuguese, Spanish, and Turkish. It will also offer a new offline version of CS Fundamentals to empower schools in low- and no-bandwidth environments to teach computer science to all students.
Expanding into the MENA region is on Partovi's agenda too. He says, "There are already 500,000 students and about 20,000 teachers in the Arab world using Code.org, despite it, for now, being only in English and only on internet-connected computers. That means we haven't done almost any work to overcome the obstacles in the region: we haven't properly translated into Arabic, we don't yet support use on disconnected computers, and we don't yet work well on smartphones and tablets. Most of the students are in private schools or international schools, because they are using it in English, but it shows that the interest in what we do is already high."
Region by region, Partovi hopes to achieve Code.org's mission of changing the educational system, making computer science a permanent part of school curricula. "The education establishment especially doesn't recognize that this is a field as fundamental as mathematics or science," Partovi says. "Everybody understands that technology is the future; nobody needs that explained, and nobody needs to be told that there is money in technology and that it is changing everything. What people don't realize is that when you start learning the alphabet, you can also simultaneously start learning computer science. Nobody questions why we are teaching math or science, but people still question whether we should teach computer science, when they shouldn't even need to ask."
However, some of Silicon Valley's most prominent leaders did not need much persuasion: so far, Code.org has been backed by Amazon, Microsoft, Facebook, the Bill & Melinda Gates Foundation, PricewaterhouseCoopers, Infosys Foundation USA, and many others. Furthermore, Partovi recently helped Pope Francis write a line of code for an app during an event organized by the Scholas Occurrentes foundation in Vatican City. "Computer science belongs in primary and secondary schools as a fundamental subject, not just for the students who want to become coders, but for those who want to become lawyers, nurses, or farmers, because understanding technology is going to be important," Partovi concludes. "Building the creativity that computer science teaches will be important, and learning the digital skills that will be required in every career will be important. The biggest obstacle for us is this education administrative mindset. Individual teachers and parents recognize this: they want their own child to learn to code, but they don't think about why schools are not teaching it."
On March 12, 2019, we celebrate the 30th anniversary of the "World Wide Web", Tim Berners-Lee's ground-breaking invention. In just thirty years, this flagship application of the Internet has forever changed our lives, our habits, our way of thinking and seeing the world. Yet this anniversary leaves a bittersweet taste in our mouths: the initial decentralized and open version of the Web, which was meant to allow users to connect with each other, has gradually evolved into a very different version, centralized in the hands of giants who capture our data and impose their standards.
We have poured our work, our hearts and a lot of our lives out onto the internet, for better or for worse. Beyond its business uses for Big Tech, our data has become an incredible resource for malicious actors, who use this windfall to hack, steal and threaten. Citizens, small and large companies, governments: online predators spare no one. This initial mine of information and knowledge has provided fertile ground for dangerous abuse: hate speech, cyber-bullying, manipulation of information and the glorification of terrorism, all of them amplified, relayed and disseminated across borders.
Control: between Scylla and Charybdis
Faced with these excesses, some countries have decided to regain control over the Web and the Internet in general: by filtering information and communications, controlling the flow of data, and using digital instruments for the sake of sovereignty and security. The outcome of this approach is widespread censorship and surveillance. A major threat to our values and our vision of society, this project of "cyber-sovereignty" is also the antithesis of the initial purpose of the Web, which was built in a spirit of openness and emancipation. Imposing cyber-borders and permanent supervision would be fatal to the Web.
To avoid such an outcome, many democracies have favored laissez-faire and minimal intervention, preserving the virtuous circle of profit and innovation. Negative externalities remain, with self-regulation as the only barrier. But laissez-faire is no longer the best option to foster innovation: data is monopolized by giants that have become systemic, and users' freedom of choice is limited by vertical integration and lack of interoperability. Ineffective competition threatens our economies' ability to innovate.

In addition, laissez-faire means being vulnerable to those who have chosen a more interventionist or hostile stance. This question is particularly acute today for infrastructures: should we continue to remain agnostic and open, and choose a solution only on the basis of its economic competitiveness? Or should we affirm the need to preserve our technological sovereignty and our security?
A third way
To avoid these pitfalls, France, Europe and all democratic countries must take control of their digital future. This age of digital maturity involves both smart digital regulation and enhanced technological sovereignty.

Holding large actors accountable is a legitimate and necessary first step: "with great power comes great responsibility."
Platforms that relay and amplify the audience of dangerous content must assume a stronger role in information and prevention. The same goes for e-commerce, when consumers' health and safety are undermined by dangerous or counterfeit products made available to them with one click. We should apply the same focus to systemic players in the field of competition: vertical integration should not hinder users' choice of goods and services.
But for our action to be effective and leave room for innovation, we must design a "smart regulation". Of course, our goal is not to impose an indiscriminate and disproportionate regulatory burden on all digital actors. Rather, "smart regulation" relies on the transparency, auditability and accountability of the largest players, in the framework of a close dialogue with public authorities. With this in mind, France has launched a six-month experiment with Facebook on the subject of hate content, the results of which will contribute to current and upcoming legislative work on this topic.
In the meantime, in order to maintain our influence and promote this vision, we will need to strengthen our technological sovereignty. In Europe, this sovereignty is already undermined by the prevalence of American and Asian actors. As our economies and societies become increasingly connected, the question becomes more urgent.

Investments in the most strategic disruptive technologies, and the construction of an innovative normative framework for the sharing of data of general interest: we have leverage to encourage the emergence of reliable and effective solutions. But we will not be able to avoid protective measures when the security of our infrastructure is likely to be endangered.
To build this sustainable digital future together, I invite my G7 counterparts to join me in Paris on May 16th. On the agenda, three priorities: the fight against online hate, a human-centric artificial intelligence, and ensuring trust in our digital economy, with the specific topics of 5G and data sharing.

Our goal? To take responsibility. Gone are the days when we could afford to wait and see.

Our leverage? If we join our wills and forces, our values can prevail. We have the responsibility to design a World Wide Web of Trust. It is still within our reach, but the time has come to act.
Global research and advisory firm Gartner has highlighted the top strategic technology trends that organizations need to explore in 2019 in its special report titled “Top 10 Strategic Technology Trends for 2019”.
Gartner defines a strategic technology trend as one with substantial disruptive potential that is beginning to break out of an emerging state into broader impact and use, or a rapidly growing trend with a high degree of volatility that will reach a tipping point over the next five years.
“The Intelligent Digital Mesh has been a consistent theme for the past two years and continues as a major driver through 2019. Trends under each of these three themes are a key ingredient in driving a continuous innovation process as part of a Continuous NEXT strategy,” said David Cearley, vice president and Gartner Fellow.
“For example, artificial intelligence (AI) in the form of automated things and augmented intelligence is being used together with IoT, edge computing and digital twins to deliver highly integrated smart spaces. This combinatorial effect of multiple trends coalescing to produce new opportunities and drive new disruption is a hallmark of the Gartner top 10 strategic technology trends for 2019.”
The top 10 strategic technology trends for 2019 are:
Autonomous Things

Autonomous things, such as robots, drones and autonomous vehicles, use AI to automate functions previously performed by humans. Their automation goes beyond the automation provided by rigid programming models, and they exploit AI to deliver advanced behaviours that interact more naturally with their surroundings and with people.
“As autonomous things proliferate, we expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things, with multiple devices working together, either independently of people or with human input,” said Cearley.
“For example, if a drone examined a large field and found that it was ready for harvesting, it could dispatch an ‘autonomous harvester.’ Or in the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones on board the vehicle could then ensure final delivery of the package.”
Augmented Analytics

Augmented analytics focuses on a specific area of augmented intelligence, using machine learning (ML) to transform how analytics content is developed, consumed and shared. Augmented analytics capabilities will advance rapidly to mainstream adoption, as a key feature of data preparation, data management, modern analytics, business process management, process mining and data science platforms.
Automated insights from augmented analytics will also be embedded in enterprise applications — for example, those of the HR, finance, sales, marketing, customer service, procurement and asset management departments — to optimize the decisions and actions of all employees within their context, not just those of analysts and data scientists. Augmented analytics automates the process of data preparation, insight generation and insight visualization, eliminating the need for professional data scientists in many situations.
“This will lead to citizen data science, an emerging set of capabilities and practices that enables users whose main job is outside the field of statistics and analytics to extract predictive and prescriptive insights from data,” said Cearley. “Through 2020, the number of citizen data scientists will grow five times faster than the number of expert data scientists. Organizations can use citizen data scientists to fill the data science and machine learning talent gap caused by the shortage and high cost of data scientists.”
AI-Driven Development

The market is rapidly shifting from an approach in which professional data scientists must partner with application developers to create most AI-enhanced solutions to a model in which the professional developer can operate alone using predefined models delivered as a service.
This provides the developer with an ecosystem of AI algorithms and models, as well as development tools tailored to integrating AI capabilities and models into a solution. Another level of opportunity for professional application development arises as AI is applied to the development process itself to automate various data science, application development and testing functions. By 2022, at least 40 percent of new application development projects will have AI co-developers on their team.
“Ultimately, highly advanced AI-powered development environments automating both functional and nonfunctional aspects of applications will give rise to a new age of the ‘citizen application developer’ where nonprofessionals will be able to use AI-driven tools to automatically generate new solutions. Tools that enable nonprofessionals to generate applications without coding are not new, but we expect that AI-powered systems will drive a new level of flexibility,” said Cearley.
Digital Twins

A digital twin refers to the digital representation of a real-world entity or system. By 2020, Gartner estimates there will be more than 20 billion connected sensors and endpoints, and digital twins will exist for potentially billions of things. Organizations will implement digital twins simply at first. They will evolve them over time, improving their ability to collect and visualize the right data, apply the right analytics and rules, and respond effectively to business objectives.
“One aspect of the digital twin evolution that moves beyond IoT will be enterprises implementing digital twins of their organizations (DTOs). A DTO is a dynamic software model that relies on operational or other data to understand how an organization operationalizes its business model, connects with its current state, deploys resources and responds to changes to deliver expected customer value,” said Cearley. “DTOs help drive efficiencies in business processes, as well as create more flexible, dynamic and responsive processes that can potentially react to changing conditions automatically.”
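The core mechanism Cearley describes, a software model that mirrors a physical asset by ingesting telemetry and reacting to it, can be sketched in a few lines. This is a minimal illustration, not Gartner's or any vendor's implementation; the `DigitalTwin` class, the pump asset, and the overheating rule are all hypothetical.

```python
# Minimal sketch of a digital twin: a software mirror of a physical asset
# that is updated from sensor telemetry and applies rules to the mirrored
# state. All names and thresholds here are illustrative assumptions.

class DigitalTwin:
    """Software mirror of one physical asset."""

    def __init__(self, asset_id, max_temp_c):
        self.asset_id = asset_id
        self.max_temp_c = max_temp_c
        self.state = {}      # last known sensor readings
        self.alerts = []

    def ingest(self, reading):
        """Update the twin's state from one telemetry message, then check rules."""
        self.state.update(reading)
        if self.state.get("temp_c", 0) > self.max_temp_c:
            self.alerts.append(
                f"{self.asset_id}: overheating at {self.state['temp_c']}C"
            )

# A hypothetical pump twin receiving two telemetry messages.
twin = DigitalTwin("pump-42", max_temp_c=80)
twin.ingest({"temp_c": 75, "rpm": 1200})
twin.ingest({"temp_c": 85, "rpm": 1150})
print(twin.alerts)  # the second reading trips the overheating rule
```

In a real deployment the twin would also run analytics against historical data and feed decisions back to the asset; the point here is only the mirrored-state idea.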
Empowered Edge

The edge refers to endpoint devices used by people or embedded in the world around us. Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to these endpoints. It tries to keep traffic and processing local, with the goal of reducing traffic and latency.

In the near term, edge computing is being driven by IoT and the need to keep processing close to the endpoint rather than on a centralized cloud server. However, rather than creating a new architecture, cloud computing and edge computing will evolve as complementary models, with cloud services being managed as a centralized service executing not only on centralized servers, but also on distributed servers on-premises and on the edge devices themselves.
Over the next five years, specialized AI chips, along with greater processing power, storage and other advanced capabilities, will be added to a wider array of edge devices. The extreme heterogeneity of this embedded IoT world and the long life cycles of assets such as industrial systems will create significant management challenges. Longer term, as 5G matures, the expanding edge computing environment will have more robust communication back to centralized services. 5G provides lower latency, higher bandwidth, and (very importantly for edge) a dramatic increase in the number of nodes (edge endpoints) per square km.
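The traffic-reduction rationale behind edge computing can be made concrete with a toy example: instead of forwarding every raw sensor sample to the cloud, an edge node reduces a window of readings to a compact summary and ships only that. This is a hedged sketch; the window size, the statistics chosen, and the `summarize_window` function are illustrative assumptions, not part of any particular edge platform.

```python
# Sketch of local processing at the edge: many raw readings are reduced to
# one small summary record before anything is sent upstream, cutting both
# traffic and the latency of acting on the data.

def summarize_window(readings):
    """Reduce a window of raw sensor readings to a small summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# e.g. one minute of temperature samples collected at the edge node
raw = [21.0, 21.5, 22.1, 35.0, 21.2]
summary = summarize_window(raw)   # five samples shrink to one record
print(summary["count"], round(summary["mean"], 2))
```

A real edge node might additionally act locally (for example, raising an alarm on the 35.0 spike) without waiting for a round trip to a central server, which is the latency half of the argument.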
Immersive Experience

Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience.
“Over time, we will shift from thinking about individual devices and fragmented user interface (UI) technologies to a multichannel and multimodal experience. The multimodal experience will connect people with the digital world across hundreds of edge devices that surround them, including traditional computing devices, wearables, automobiles, environmental sensors and consumer appliances,” said Cearley.
“The multichannel experience will use all human senses as well as advanced computer senses (such as heat, humidity and radar) across these multimodal devices. This multiexperience environment will create an ambient experience in which the spaces that surround us define ‘the computer’ rather than the individual devices. In effect, the environment is the computer.”
Blockchain

Blockchain, a type of distributed ledger, promises to reshape industries by enabling trust, providing transparency and reducing friction across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow.
Today, trust is placed in banks, clearinghouses, governments and many other institutions as central authorities, with the “single version of the truth” maintained securely in their databases. The centralized trust model adds delays and friction costs (commissions, fees and the time value of money) to transactions. Blockchain provides an alternative trust model and removes the need for central authorities in arbitrating transactions.
“Current blockchain technologies and concepts are immature, poorly understood and unproven in mission-critical, at-scale business operations. This is particularly so with the complex elements that support more sophisticated scenarios,” said Cearley. “Despite the challenges, the significant potential for disruption means CIOs and IT leaders should begin evaluating blockchain, even if they don’t aggressively adopt the technologies in the next few years.”
Many blockchain initiatives today do not implement all of the attributes of blockchain — for example, a highly distributed database. These blockchain-inspired solutions are positioned as a means to achieve operational efficiency by automating business processes, or by digitizing records. They have the potential to enhance sharing of information among known entities, as well as improving opportunities for tracking and tracing physical and digital assets. However, these approaches miss the value of true blockchain disruption and may increase vendor lock-in. Organizations choosing this option should understand the limitations, be prepared to move to complete blockchain solutions over time, and recognize that the same outcomes may be achieved with more efficient and tuned use of existing nonblockchain technologies.
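The tamper-evidence property that underlies the trust argument above is simple to demonstrate: each block commits to the hash of its predecessor, so altering any earlier record invalidates every later back-link. The sketch below shows only that hash-chaining idea; it deliberately omits the distribution and consensus mechanisms that make a real blockchain trustless, and all names in it are illustrative.

```python
# Minimal hash-chained ledger: each block stores the SHA-256 hash of the
# previous block, so editing history breaks the chain of back-links.
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's full contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block committing to the current tip of the chain."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain):
    """Check every block's back-link against a recomputed hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"from": "A", "to": "B", "amount": 10})
append_block(chain, {"from": "B", "to": "C", "amount": 4})
print(verify(chain))                # True: all back-links check out
chain[0]["data"]["amount"] = 999    # tamper with history
print(verify(chain))                # False: the edit broke the chain
```

In a centralized database the edited record would simply be the new truth; here any participant holding the chain can detect the rewrite, which is the friction-reducing trust property the paragraph above describes.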
Smart Spaces

A smart space is a physical or digital environment in which humans and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. Multiple elements — including people, processes, services and things — come together in a smart space to create a more immersive, interactive and automated experience for a target set of people and industry scenarios.
“This trend has been coalescing for some time around elements such as smart cities, digital workplaces, smart homes and connected factories. We believe the market is entering a period of accelerated delivery of robust smart spaces with technology becoming an integral part of our daily lives, whether as employees, customers, consumers, community members or citizens,” said Cearley.
Digital Ethics and Privacy
Digital ethics and privacy is a growing concern for individuals, organizations and governments. People are increasingly concerned about how their personal information is being used by organizations in both the public and private sector, and the backlash will only increase for organizations that are not proactively addressing these concerns.
“Any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is actually about more than just these components,” said Cearley. “Trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately an organization’s position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing.’”
Quantum Computing

Quantum computing (QC) is a type of nonclassical computing that operates on the quantum state of subatomic particles (for example, electrons and ions) that represent information as elements denoted as quantum bits (qubits). The parallel execution and exponential scalability of quantum computers means they excel with problems too complex for a traditional approach, or where traditional algorithms would take too long to find a solution.
Industries such as automotive, financial, insurance, pharmaceuticals, military and research organizations have the most to gain from the advancements in QC. In the pharmaceutical industry, for example, QC could be used to model molecular interactions at atomic levels to accelerate time to market for new cancer-treating drugs or QC could accelerate and more accurately predict the interaction of proteins leading to new pharmaceutical methodologies.
“CIOs and IT leaders should start planning for QC by increasing their understanding of how it can apply to real-world business problems. Learn while the technology is still in the emerging state. Identify real-world problems where QC has potential and consider the possible impact on security,” said Cearley. “But don’t believe the hype that it will revolutionize things in the next few years. Most organizations should learn about and monitor QC through 2022 and perhaps exploit it from 2023 or 2025.”
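For readers to whom "operating on a quantum state" is abstract, the smallest possible illustration is a single qubit simulated classically as a pair of complex amplitudes. The sketch below applies a Hadamard gate to a qubit prepared in |0⟩, producing an equal superposition; it is a teaching toy under those assumptions, not how a quantum computer is programmed in practice.

```python
# A one-qubit state is a pair of amplitudes (a, b) for the basis states
# |0> and |1>; measurement probabilities are |a|^2 and |b|^2.
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state vector."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

state = (1.0, 0.0)          # qubit prepared in |0>
state = hadamard(state)     # now an equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in state]
print([round(p, 3) for p in probs])   # [0.5, 0.5]: a 50/50 measurement
```

Simulating n qubits this way needs 2^n amplitudes, which is precisely why classical machines cannot scale this trick and why the problems Cearley mentions are attractive targets for real quantum hardware.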
Analysts will explore these top industry trends at Gartner Symposium/ITxpo 2019, running from March 4 to 6, 2019, in Dubai, UAE.
Gartner Symposium/ITxpo is the world’s most important gathering of CIOs and senior IT leaders, uniting a global community of CIOs with the tools and strategies to help them lead the next generation of IT and achieve business outcomes. More than 25,000 CIOs, senior business and IT leaders worldwide will gather for the insights they need to ensure that their IT initiatives are key contributors to, and drivers of, their enterprise’s success.
The forum of leading global policymakers from developed and emerging countries has in recent years addressed several strategic themes that challenge Algeria today. These included, but were not limited to, the Fourth Industrial Revolution, climate, migration, energy and the impacts of terrorism. In Strategies for Adapting to the New World, Prof. Klaus Schwab, founder and president of the World Economic Forum, said: “The 4th Industrial Revolution refers to the fusion of technologies, especially in the digital world, which has significant effects on the political, economic and social systems; it will be a matter of establishing a common understanding of this industrial revolution.” Themes as varied as how our lives will be changed by this revolution, how business structures will be modified by new technologies and how rapid technological change will transform work are at stake, including the future of financial services and how to restart the global economy. Debates about the possible impact of artificial intelligence on defence systems, and of fuel energy on climate change, will take on a different meaning. Facing this new world revolution from 2020 through 2040, including the development of artificial intelligence and digital transformation, Algeria has much to do in the political, security, social, cultural and economic fields of its adaptation strategies if it wants to avoid marginalisation.
For François-Xavier Sambron, government institutions and businesses spend a great deal of time and energy managing tasks daily, whether prioritising, planning over time or overseeing routine workloads. Although familiar, this exercise remains complicated and often ineffective, since efficiency implies breaking tasks down. In this context, according to this author, six digital impacts are revolutionising the function of the political and economic manager (see “My Business-Digital”, February 2018).
First, in traditional management, the manager’s power resided primarily in his ability to distribute or withhold information, as the famous adage “information is power” reminds us. Today, his legitimacy derives from his ability to link and interconnect collaborators and services, and to synthesise and sort through the profusion of information received to extract the essentials. That old model is obsolete: the “new generation” manager gives priority to sharing and transparency, seeking above all to empower his employees by opening doors and guiding them in the right direction. By greatly facilitating the flow of information within the company, digitalisation is both the primary trigger of and contributor to so-called collaborative management.
Second, the manager must above all be a developer of collective intelligence, a leader and a facilitator. Because information is now widely shared, he is no longer the one who knows but the one who pulls his team forward. He is the host of a team that seeks to fulfil its objectives by taking maximum advantage of the company’s resources, making different skills interact to create value.
Third, vertical authority, based on the hierarchical organisation of the company and the status of collaborators, gradually gives way to a horizontal authority based on the knowledge, competence and reputation of each individual. The company is now governed by two forms of authority acting in parallel: one falling within the processes and priorities defined by management, the other reflecting the competence of each collaborator. In this context, the manager must rebuild his power horizontally, both to communicate and to identify, value and organise skills; contrary to the past, his leadership is no longer expressed vertically but horizontally.
Fourth, thanks to the digital revolution, the manager now has a wide variety of tools that allow him to send the right message at the right time to the right collaborator, whether via messaging (instant or not), social networks, collaborative platforms, SMS, and so on. Moreover, the multimedia capabilities of these different means of communication (audio, video, animation) facilitate dialogue and encourage feedback from collaborators.
Fifth, for the effectiveness of an organisation, new tools such as collaborative applications, project management solutions and business or administrative workflows make it possible to set and share priorities and objectives, and to ensure the detailed planning of the tasks to be performed as well as the tracking of their progress. For activity monitoring, digital tools provide many elements of measurement used both to evaluate the activity and to identify malfunctions. The introduction of quantifiable indicators (productivity, costs, quality, deadlines) enables continuous monitoring of the activity and the rapid initiation of corrective actions in case of discrepancies. Thanks to this continuous supervision, the manager is now able to steer both his team as a whole and each of its members finely, and to keep to the fixed course.
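The indicator-and-threshold idea above can be sketched in a few lines of code. This is an illustrative example only; the indicator names and target values are hypothetical, not taken from the author:

```python
# Illustrative sketch of continuous activity monitoring.
# Indicator names and thresholds below are hypothetical examples.
THRESHOLDS = {
    "productivity": 0.90,   # ratio of completed vs. planned tasks
    "quality":      0.95,   # share of deliverables accepted without rework
    "on_time":      0.85,   # share of tasks finished before their deadline
}

def flag_discrepancies(measurements):
    """Return the indicators falling below their target, so the
    manager can trigger corrective actions early."""
    return {name: value
            for name, value in measurements.items()
            if value < THRESHOLDS.get(name, 0.0)}

# One week of (hypothetical) measurements for a team:
week = {"productivity": 0.93, "quality": 0.88, "on_time": 0.80}
alerts = flag_discrepancies(week)   # quality and on_time need attention
```

In practice such indicators would be fed automatically by the workflow and project-management tools mentioned above, turning supervision into a continuous rather than periodic exercise.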
Sixth, the digital transformation of a company primarily concerns its human resources, the pillar of management, making it necessary to accompany all collaborators through a transition of which they will be the main actors. In this context, the manager plays the leading role: he must engage his team in this significant project, encourage each employee to take their place, explain the merits of these changes, reassure collaborators about their future and value the role of each in this transformation.
Politicians, entrepreneurs, researchers and ordinary citizens all live today in a society of electronic communication, plural and immediate, that compels us to make decisions in real time. Since the control of time is the primary challenge of this century, any failure to keep pace with these mutations would further isolate the country. Algeria therefore needs an adaptation strategy in the face of new global and energy changes brought by the advent of the Fourth Economic Revolution, which between 2020 and 2040 will be based on digital technologies, green industries and a new energy mix. As the world advances, with artificial intelligence and digital technology revolutionising international relations, the management of states, institutions and businesses, and even personal relationships, many leaders may need a cultural revolution (an upgrade) to adapt to the workings of the new economy. Most organisations must move away from the utopian patterns of the 1970s through the 1990s and be ready at the dawn of a veritable planetary revolution. Emerging countries have no future if they do not promote good governance and the knowledge economy, the two fundamental pillars of 21st-century development, and adapt them to these new mutations. For Algeria, the 2016-2018 World Economic Forum reports, which fall far short of reflecting the country’s vast potential, nevertheless carry a lesson: the balance sheet is very mixed despite the scale of public spending. Furthermore, according to an OECD report, Algeria spends twice as much to achieve half the results of similar countries in the MENA region.
Algeria suffers from a closed business environment that legislation alone cannot fix: a bureaucratic financial system, an inadequate socio-educational system and opaque land transactions, for instance, tend to generate high costs for businesses. Much work remains to be done on the political, social, cultural and economic factors needed to liberate creative energies and attract the real creators of local and international private wealth, who are confronted with the bureaucratic burden and the lack of visibility and coherence in socio-economic policy. This implies a specific strategic objective of adapting to the new world over at least ten years, and a different governmental and institutional organisation built around essential ministries and large regional eco-poles. Some would still choose the wrong path in their economic policy, ignoring the new global mutations, which could lead the country to a stalemate and considerable financial losses. It is thus necessary to move to a new model of consumption, rather than persisting in the significant strategic error of reasoning within a linear consumption model and continuing to live on the illusion of the material age. This would urgently require a cultural change among all business leaders. I would draw the Government’s attention to the incoherence of current policy, which may accelerate the depletion of the country’s foreign exchange reserves without resolving the real problems of its development. Technological and managerial accumulation, within the framework of the values of financial capital, is the only way to avoid the monetary illusion.
An article by Menna A. Farouk (@MennaFrouk91), posted on Al-Monitor on August 28, 2017, is about girls’ education in Egypt. The author, an Egyptian journalist writing about social, political and cultural issues in Egypt, touched here on what is perhaps one of the most critical aspects of life in Egypt today: a technical school trying to increase the number of Egyptian women in the coding industry.
Questions such as whether “social mixity” and “better schools” in Europe are antinomic objectives remain unanswered to date; in Egypt, it seems that some sort of answer to this question was formulated by the opening of this school. Is this model sustainable? We would argue that it is at least debatable. Thoughts?
Facebook/almakinah Women take part in AlMakinah’s GearUp course at the American University in Cairo, Egypt, Aug. 20, 2017.
An Egyptian tech school is shattering stereotypes in the tech industry by enabling more women to learn programming, coding and the elements of building websites.
AlMakinah, a Cairo-based tech school, has initiated a new program exclusively for women titled “The Women in Tech Track” with the aim of addressing the imbalances in the gender ratio in Egypt’s tech industry.
“The program includes sessions on front-end web development, data visualization, digital marketing as well as workshops on confidence and resilience and talks by industry experts about the tech field,” Hoda Hamad, the manager of the program, told Al-Monitor.
Hamad was a participant in one of AlMakinah’s tech classes. After noticing that the AlMakinah classes were mostly dominated by men, she decided to join the school’s team and launch a program for women “to rebrand the field and remove any social stereotypes set by society.”
“In our last full-stack program, we only had four girls in the class. I was one of them. And after attending this class, I decided to change that,” she said.
There is now a class of 17 bright women, aged 15 to 36, from different educational and economic backgrounds. “They are currently learning front-end web development. They started on Aug. 20 and the program will go until Aug. 31,” she noted.
AlMakinah was launched two years ago to bridge the gap between the requirements tech companies seek from employees and the skills job seekers possess. Since then, the tech school has launched several programs that teach web development in a practical way that satisfies the needs of the companies looking to hire new people.
“Apart from teaching women the basics of programming and building their own websites, the Women in Tech Track seeks to give those women access to numerous job opportunities, including remote jobs through which they could earn a stable income working from home,” Hamad said.
The Women in Tech Track also launched a hashtag on Twitter, #YouCodeGirl. “The main aim of the hashtag is to encourage women to penetrate the tech industry as well as to trigger a supportive culture among their communities,” Hamad added.
According to statistics, the representation of women in the tech industry is significantly low. In the United States, the percentage of computing jobs held by women has fallen over the past 23 years, according to a 2015 study released by the American Association of University Women, a nonprofit organization promoting gender equality. During that period, the number of women earning computing degrees also declined.
In Europe, women only make up 30% of the 7 million people working in the continent’s digital sector and they are underrepresented at all levels, especially in decision-making positions, according to a report released by the World Economic Forum in 2016.
“The gender gap in STEM [science, technology, engineering and math] industries remains stubbornly high, with women representing around 26% of the STEM workforce in developed countries,” the report added. The report also stated that the figure is even lower in developing countries.
However, according to UNESCO statistics as reported in US News & World Report, more women have recently started to study STEM in the Arab world. “In the Gulf region, women comprise 60 percent of engineering students in universities, compared with 30 percent in the US and Europe.”
According to a report by the American Association for the Advancement of Science, Egyptian female students are beginning to enter the tech field, traditionally seen as male-dominated.
Dalia Hisham, a 27-year-old programmer, said that women in Egypt are bravely facing the challenges of being in the tech industry, including the inadequate number of female role models in the field, gender bias in the workplace and unequal growth opportunities compared to men.
“But such challenges are gradually being overcome by women as they are now creating role models and mentors in the field and are having more growth opportunities,” Hisham told Al-Monitor.
“It is now getting better for women to enter the tech field in Egypt, especially amid the launch of initiatives like Women in Tech Track that seek to break gender bias and encourage more women to infiltrate into the industry,” the young programmer added.
The situation in the GCC countries is improving after moves to compromise were made by all parties. Business as usual is soon to return, and with it the Internet of Things (IoT) will be at the top of everyone’s agenda, for public and private organisations alike. As elsewhere, IoT has the potential to unlock up to 11% in incremental GDP in the GCC region, driving economic growth in every country, according to A.T. Kearney’s latest report on the IoT in the GCC for a brighter, more sustainable future. A.T. Kearney is an American global management consulting firm that focuses on strategic and operational CEO-agenda issues facing businesses, governments and institutions around the globe.
The local media have covered this topic profusely for some time, and according to the above report, the GCC governments had, even before the Qatar crisis started, been investing in IoT, mainly within their respective policies of diversifying their economies away from dependence on oil. Investing in IoT offers a variety of avenues for economic growth. It also has the potential to address many of the region’s challenges, to the point where, per the same report, the region’s IoT solutions market would be worth $11 billion by 2025, generating potential value for the economy anticipated at nearly $160 billion.
IoT in the GCC for a Brighter, more Sustainable Future
The Internet of Things has the potential to unlock up to 11 percent in incremental GDP, driving growth and prosperity across the region for the next decade.
Citizens in the Gulf Cooperation Council (GCC) have long enjoyed a standard of living known to be among the best in the world, thanks to the region’s wealth of abundant natural resources. However, the slump in oil and gas prices has shed light on a number of challenges and calls for transformational efforts from governments, businesses, and individuals.
Low oil prices may mean the region’s governments will struggle to balance their budgets and find the resources needed to address other issues. For example, serving the region’s young, ambitious population will require generating a plethora of attractive jobs and providing the education needed to bridge any skill gaps. Healthcare will also be in the spotlight as the population has an array of health issues, including high rates of adult obesity. In addition, energy and access to water could become more challenging as the region has high rates of consumption. For example, per capita water consumption in the United Arab Emirates is about 80 percent higher than the global average.
The region’s governments know they need to wean their economies away from oil to ensure continuity in high standards of living, and they are making progress in their diversification and transformation efforts. Large investments in innovation, such as Saudi Arabia’s Public Investment Fund infusing money into Uber and Softbank and the anticipated launch of the $1 billion e-commerce company Noon, are testament to the importance of innovation on the business and national agendas of GCC countries.
The Internet of Things—the exponentially growing ecosystem of connected devices and systems—offers a major avenue for innovation and has the potential to address many of the region’s challenges while also spurring economic growth. The seamless combination of embedded intelligence, ubiquitous connectivity, and deep analytical insights creates a platform for unique and disruptive value for companies, individuals, and societies.
Here we discuss what IoT is, how it can transform major business sectors across the region, and what it will take for companies, governments, and the high-tech industry to attain the ultimate connectivity—bringing together the tremendous potential of this unifying phenomenon.