Material Removes Ice Buildup Without Power or Chemicals

by Karl-Lydie Jean-Baptiste, Massachusetts Institute of Technology, Cambridge | The passive, solar-powered system could prevent freezing on airplanes, wind turbines, power lines, and other surfaces. Images of a droplet on a surface show the process of freezing (top row), during which condensation temporarily forms on the outside of the droplet as it freezes. The next two rows show the droplet thawing out on a surface coated with the new layered material. In the middle row, the droplet is heated by the coating immediately upon freezing, and the dashed lines show where the freezing at top is just catching up with the thawing from below. The bottom row shows a slower thawing process. Under identical conditions, the droplet stays frozen without the new coating. (The Varanasi Research Group)

From airplane wings, to overhead power lines, to the giant blades of wind turbines, a buildup of ice can cause problems ranging from impaired performance all the way to catastrophic failure. Preventing that buildup usually requires energy-intensive heating systems or chemical sprays that are environmentally harmful.

A completely passive, solar-powered system was developed that is based on a three-layered material that can be applied or even sprayed onto the surfaces to be treated. It collects solar radiation, converts it to heat, and then spreads that heat around so the melting is not just confined to the areas exposed directly to the sunlight. Once applied, it requires no further action or power source. It can even de-ice at night using artificial lighting.

The usual de-icing sprays for aircraft and other applications use ethylene glycol, a chemical that is environmentally unfriendly. Airlines don’t like to use active heating, both for cost and safety reasons.

As an alternative, the system captures the heat of the Sun and uses it in a passive approach. It is not necessary to produce enough heat to melt the bulk of the ice that forms; all that’s needed is for the boundary layer, right where the ice meets the surface, to melt enough to create a thin layer of water, which will make the surface slippery enough so any ice will slide off.

The top layer of the material is an absorber that traps incoming sunlight and converts it to heat. The material absorbs 95 percent of the incident sunlight and loses only 3 percent to reradiation. In principle, that layer could in itself help to prevent frost formation, but with two limitations: It would only work in the areas directly in sunlight, and much of the heat would be lost back into the substrate material — the airplane wing or power line, for example — and would not help with the de-icing.

To compensate for the localization, a spreader layer was added — a very thin layer of aluminum, just 400 micrometers thick, that is heated by the absorber layer above it and very efficiently spreads that heat out laterally to cover the entire surface. The material was selected to have thermal response that is fast enough so that the heating takes place faster than the freezing.
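
To put rough numbers on those figures, here is a back-of-envelope sketch. The 95 percent absorption, 3 percent re-radiation loss, and 400-micrometer aluminum layer come from the description above; the solar irradiance and aluminum diffusivity are standard textbook values assumed for illustration, not figures from the article.

```python
# Rough back-of-envelope numbers for the photothermal trap. The 95% absorption,
# 3% re-radiation loss, and 400-micrometer aluminum layer come from the article;
# the solar irradiance and aluminum diffusivity are assumed textbook values.
solar_irradiance = 1000.0   # W/m^2, typical peak sunlight (assumption)
absorbed = 0.95             # fraction of incident sunlight absorbed
reradiated = 0.03           # fraction lost back to re-radiation

net_flux = solar_irradiance * absorbed * (1 - reradiated)
print(f"Net heat flux into the trap: ~{net_flux:.0f} W/m^2")   # ~920 W/m^2

# Characteristic time for the thin aluminum spreader to carry heat sideways,
# estimated with the diffusion scaling t ~ L^2 / alpha.
alpha_aluminum = 9.7e-5     # m^2/s, thermal diffusivity of aluminum (assumption)
for lateral_distance in (0.01, 0.10):                     # 1 cm and 10 cm
    t = lateral_distance**2 / alpha_aluminum
    print(f"~{lateral_distance*100:.0f} cm of lateral spreading in roughly {t:.0f} s")
```

Even this crude estimate shows why a highly conductive spreader matters: nearly a kilowatt per square meter of absorbed heat can be carried a centimeter sideways in about a second, fast enough to compete with freezing at the surface.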

Finally, the bottom layer is simple foam insulation that keeps the heat from being wasted downward and keeps it where it’s needed, at the surface. Because the photothermal trap stays at an elevated temperature, it not only de-ices passively but also prevents ice from building up in the first place.

The three layers, all made of inexpensive, commercially available material, are then bonded together and can be bonded to the surface that needs to be protected. For some applications, the materials could instead be sprayed onto a surface, one layer at a time.

Karl-Lydie Jean-Baptiste is a Media Relations Assistant, Freelance Writer and Content Manager at the Massachusetts Institute of Technology (MIT). For more information, contact Karl-Lydie Jean-Baptiste at kjeanbap@mit.edu, 617-253-1682, or LinkedIn.

 



Prevalent Challenges Implementing Artificial Intelligence in the Workplace

by O’Kelly E. McWilliams III & Jennifer R. Budoff | A high-level overview of some of the more prevalent challenges employers may encounter when deploying AI in the workplace, and guidance on the proactive steps employers should consider. (Image by Gerd Altmann from Pixabay)

As more and more companies begin to utilize Artificial Intelligence (AI) in the workplace, it becomes increasingly important for employers to understand both the risks and rewards that accompany this new technology. The impact of AI in the business arena cannot be overstated, with firms spending nearly $24 billion on AI-related acquisitions and mergers in 2018. Global spending on cognitive and AI systems is forecast to continue its trajectory of robust growth, reaching $77.6 billion in 2022 as businesses invest in projects that utilize cognitive/AI software capabilities.

While the use of AI can be an efficient and cost-effective means for employers to handle tasks such as talent acquisition, compensation analysis, and the completion of administrative duties, it is not without its challenges. As discussed below, the use of AI may also bring with it the potential for implicit bias and disparate impact on protected categories, particularly in the context of gender and age. In addition, if AI is not properly introduced into the workforce, it may foster concerns among employees that the company no longer values their work, or cause anxiety about job security. This article sets forth a high-level overview of some of the more prevalent challenges employers may encounter when deploying AI in the workplace, while also offering guidance on the proactive steps employers should consider when implementing or utilizing AI.

The Growing Use of AI
AI is often used in the workplace to assist employers with recruitment through the use of algorithms to make hiring decisions. According to a 2017 survey by the talent software firm CareerBuilder, approximately 55 percent of U.S. human resource managers opined that AI will become a regular part of their work within the next five years. Similarly, as reported by the Society for Human Resource Management following a 2018 survey of more than 1,100 in-house counsel, human resource professionals and C-suite executives, 49 percent of respondents said that they already use AI and advanced data analytics for recruiting and hiring. While the use of AI may assist these companies, the technology may not always eliminate bias in the recruitment process.

The Potential for Implicit Bias and Disparate Treatment
Title VII of the Civil Rights Act of 1964, as amended, prohibits employers from discriminating against an individual on the basis of race, color, sex, national origin or religion with respect to all aspects of employment. According to the Bureau of Labor Statistics Monthly Labor Review, Occupational Employment Projections to 2022, employment in computer science and engineering jobs is growing at more than double the national average. Despite the surge in this field, women and minorities continue to be under-represented. In 2016, the U.S. Equal Employment Opportunity Commission stated that diversity in the high-tech sector is “a timely and relevant topic for the Commission to investigate and address.” Since then, some companies have evaluated using AI in the recruitment process to increase diversity in their workforce. As discussed below, however, it may ultimately have the opposite effect.

As reported by Reuters, in 2017, the online tech giant Amazon announced it would be shuttering an experimental hiring tool it had been working on for the past several years. Amazon had hoped to use the tool to review job applicants’ resumes and streamline the search for top talent. Unfortunately, it was discovered that the computer program showed a bias against women when it came to recruitment for software developer jobs and other technical positions. According to Reuters, Amazon trained its computer programs to vet applicants through patterns in resumes submitted to the company during a 10-year period. Because the tech industry remains a male-dominated field, the majority of resumes submitted during that time came from men. As a result, the AI system determined that male candidates were preferable and subsequently penalized resumes that included the word “women’s” or downgraded graduates from certain all-women’s colleges. Although Amazon edited the programs to prevent these occurrences, the company ultimately decided to discontinue the program, noting that company recruiters never used the software to evaluate candidates.
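
To make the mechanism concrete, here is a toy, hypothetical illustration; the data and the scoring rule are made up for demonstration and this is not Amazon’s system. A naive model trained on historically skewed hiring outcomes ends up penalizing a resume simply for containing the word “women’s,” even though gender is never an explicit input.

```python
# A toy, hypothetical illustration of learned hiring bias. The data and the
# scoring rule are made up for demonstration; this is not Amazon's system.
history = [
    ("captain chess club", 1),
    ("lead developer open source", 1),
    ("captain women's chess club", 0),
    ("women's coding society organizer", 0),
    ("lead developer", 1),
]

# Naive model: a word's weight is the average hire rate of past resumes
# containing that word; unseen words get a neutral 0.5.
word_outcomes = {}
for text, hired in history:
    for word in text.split():
        word_outcomes.setdefault(word, []).append(hired)
weights = {word: sum(h) / len(h) for word, h in word_outcomes.items()}

def score(resume):
    words = resume.split()
    return sum(weights.get(word, 0.5) for word in words) / len(words)

# Two resumes that differ only by the word "women's" get different scores,
# because the skewed history taught the model that the word predicts rejection.
print(score("captain chess club lead developer"))           # ~0.70
print(score("captain women's chess club lead developer"))   # ~0.58
```

The point is that the model never sees gender explicitly; the skew in the historical outcomes is enough to produce the disparity.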

Similarly, employers considering the implementation of AI in the workplace should be cognizant of the potential for age discrimination claims. The Age Discrimination in Employment Act (ADEA) prohibits age-based discrimination against applicants or employees age 40 or over. The use of AI in the workplace to streamline certain activities could result in a disparate impact on an older workforce and potentially expose a company to discrimination claims. Specifically, if older workers struggle to adapt to new technology, or implicit bias results in the perception that younger employees are better suited to handle the changes than their older counterparts, employees age 40 or older may face adverse employment actions as a result. Bias could also arise if a company undergoes a reduction in force as a result of the introduction of AI into the workplace, as older workers may be laid off at a disproportionate rate relative to their younger counterparts if the AI is not programmed to account for age-related considerations.

So how can employers reap the benefits of AI without also exposing themselves to the potential for liability? Below are some best practices for employers to keep in mind when using or implementing AI in the workplace.

Best Practices

  • Engage third parties to assist in selecting AI software utilized for recruiting to ensure that the programs selected mitigate the effect of unconscious bias;
  • Devise an action plan on how best to present the topic to current employees without creating an alarmist environment;
  • Keep in mind the implications of the federal Worker Adjustment and Retraining Notification (WARN) Act and similar state laws that require an employer to provide advance notice of job loss;
  • Be aware of the protection afforded to workers under the National Labor Relations Act for engaging in concerted activities in response to changes in the workplace;
  • Be cognizant of invasion of privacy claims stemming from the over-collection of data through AI.

While the use of AI in the workforce continues to grow, and a recent study conducted by McKinsey Global Institute noted that as much as one-third of the United States workforce could be displaced by automation by the year 2030, the shift to automation will not happen overnight, affording time to create policy changes and increased regulation in areas such as layoffs, severance pay and training.  Against this backdrop, employers should ensure that they consider the implications of AI and the best practices recommended above when implementing new and innovative solutions in the workplace.

O’Kelly E. McWilliams III, a member at Mintz, advises US and international companies on a wide array of business and employment law issues. He focuses his practice on employment agreements, disputes, and compensation matters; regularly provides guidance on managing employee relationships; and has helped many companies investigate and respond to allegations of employer misconduct.

Jennifer R. Budoff, an associate at Mintz, provides clients with representation and counsel on a broad range of employment matters, with significant experience advising and defending employers in discrimination, retaliation, harassment, and wrongful termination matters, including the representation of employers in actions before Administrative Agencies and state and federal courts.



The Reality of Robots in Everyday Life

As part of one of the panels run by NESTA during the creation of the Longitude Prize, and, more recently, as a contributor to the BBC Horizon documentary on the prize, I have been asked a lot about how robots, particularly ‘autonomous robots’, will change our lives in the future (Image by Stefan Keller from Pixabay).

Autonomous robots are those which are able to make decisions for themselves, as opposed to more traditional industrial robots which have to have their every move pre-programmed by a human expert. The decisions made by autonomous robots are currently quite simple – mostly related to how to move, or where to move to – but these will increase in complexity as our understanding of the use of artificial intelligence (AI) on robots in the real world increases.

Predicting the future, particularly where science and technology is involved, is a difficult task, but some clear trends emerged during the Longitude discussions. Under the ‘Paralysis’ topic, the use of exoskeletons to provide mobility and strength was a prominent idea. Exoskeletons are simply robots you wear. They may not look like a traditional robot, but they have all the necessary parts (sensors, actuators and the ability to be programmed to automate a task) to be considered one. Future exoskeletons will also be more autonomous, able to anticipate situations and prepare responses in advance.

Under the ‘Dementia’ Longitude Prize topic, we discussed the use of autonomous robots to support people with dementia living in their homes for longer. This is just one aspect of the more general theme of using robots for assisting ageing people to live independently for longer, improving their quality of life, and reducing their impact on health services. Such robots may do anything from providing reminders and connecting residents to remote loved ones and carers, to monitoring how, where and when people move around their house (looking out for falls or abnormal behavior). A much longer-term aim is for robots to provide physical assistance, either with movement (standing up, walking) or with household tasks such as cooking and cleaning. Sadly, the science and engineering required, both in AI and in the ability to build and control suitable robot bodies for such a range of tasks, are still at least decades away.

Away from the Longitude Prize areas, we will see autonomous robots – and related technology – appear in many aspects of our lives. This may be in forms that we easily understand as robots today (e.g. machines that clean floors, carry pallets in warehouses or monitor oil pipelines) or as elements within other systems (e.g. driver assistance aids, surgical support tools, or prostheses). The most important thing to understand is that robot technology won’t appear as fully formed humanoid robots with human-like intelligence capable of doing anything. This, particularly the intelligence part, is a science fiction dream we’re not even close to being able to describe as a well-formed problem, let alone create solutions for. Instead, robot technology will emerge through special-purpose tools which will allow us to increase the quality of our lives in many aspects, whether it’s caring for our loved ones, or making our businesses more productive.

Dr Nick Hawes is Senior Lecturer in Intelligent Robotics in the School of Computer Science. (You can also follow him on Twitter – @hawesie.)

 



Looking Beyond the “Future-Proofed” Future

by Brian Kowalchuk, AIA, LEED AP, Global Director of Design, HDR | But in the near future, I think we’ll move away from the idea of labs as a collection of enclosed spaces containing specific, although somewhat changeable, functions. Instead, I think labs will be the places where people connect with one another—to gather, retrieve, analyze and discuss data and also engage in new technologies. (image, Future Labs Explores the Near Future for AR and VR at Tech 2025)

Architects, planners and scientists often raise the issue of “future-proofing” research buildings, and planning for the “lab of the future,” assuming we can look out 20 to 50 years and understand what will drive new ideas and scientific advancement.

I don’t think we can.

Just look at the digital marketplace and recent applications of Artificial Intelligence (AI). Look at the ways our children gather information—we are living in a time of rapid change.

When we think of a lab today, most of us think of a rectangular building with modular, flexible wet and dry laboratories zoned with more or less robust infrastructure with offices nearby—all neatly stacked on upper floors. Glass walls offer glimpses from corridors into labs, what we popularly call “science on display.”

Often, the labs are arranged around a light-filled atrium, with connecting bridges and stairs, designated collaboration spaces (although often devoid of collaborators) and a spectacular and grand public ground floor—all intended to enhance trans-disciplinary interaction and foster new connections.

But in the near future, I think we’ll move away from the idea of labs as a collection of enclosed spaces containing specific, although somewhat changeable, functions. Instead, I think labs will be the places where people connect with one another—to gather, retrieve, analyze and discuss data and also engage in new technologies. Labs will be hubs for an exchange of knowledge, connecting experts from around the world in both physical and virtual space, and they will be available and accessible to anyone, anywhere, anytime.

I believe that much of the physical experimentation traditionally at the heart of laboratories will look very different than it does now, perhaps even moving out of the lab into science “garages,” remotely located from those analyzing results.

With the advent of AI and automation, and the sheer quantity of data already accessible, scientists will focus on assuring the quality of the data, and figuring out how best to use it to generate new ideas and products. Data analysis, the ability to explore and develop new ideas with others, and especially the applications that result from these exchanges, is what will drive science.

Labs will become tools for this interchange and developing new applications. They will become idea factories and look nothing like the ones we are designing today.

In my upcoming columns, I will explore new ways to approach the design of labs—and how we, as lab designers, can enhance people’s experiences as they pursue science, use technology, and most importantly, engage one another.

Brian Kowalchuk, AIA, LEED AP, is Global Director of Design, HDR. Throughout his career, Brian has transformed complex programs into highly functioning and striking facilities that advance the missions of leading science and technology organizations. He embraces the inherent challenges of high-performing, technically advanced buildings to develop architecture that creates connections among people, place and technology. In recognition of these efforts, he has received numerous AIA Awards, as well as seven prestigious Lab of the Year Awards from Laboratory Design and R&D Magazine. A frequent guest speaker at industry conferences, including the Lab Design Conference, his work has been featured in a wide range of professional journals including Architecture Magazine, Interior Design Magazine, Contract, BioExecutive and Fast Company.



Improving Our Ability to See Risk Using Visual Literacy

by Doug Pontsler | Visual literacy is all about what you see, what it means, and what you do about it. Visual literacy has been taught in art education for years and provides a methodology for close looking. Because we are often influenced by our expectations of what we will see, our history of what we have seen in the past, and a natural bias to pay attention to some things and not others, we often look, but don’t see. (Image by geralt on Pixabay)

Every day, we ask our people to perform various tasks as part of our safety processes that require “seeing.” These tasks may be conducting a hazard hunt, completing a risk assessment, or performing an observation. Every day, we train our people to be proficient in the things that they do. It may be classroom training, on-line training, or on-the-job training. So, when was the last time any of us received training on how to see? If you are like most, never.

Sighted people are accustomed to seeing because they have been doing so their entire lives. We have confidence that when we look at something, we see what is there. However, what if that isn’t as true as we believe it to be? Is it possible that while we may look at something, we might not actually be seeing everything that we could be seeing? And what if there was a way to improve our ability to see the things that are right in front of us?

Visual literacy is all about what you see, what it means, and what you do about it. Visual literacy has been taught in art education for years and provides a methodology for close looking. Because we are often influenced by our expectations of what we will see, our history of what we have seen in the past, and a natural bias to pay attention to some things and not others, we often look, but don’t see. It’s why we often fail to see a potential problem, even though we have walked by it a hundred times, until it’s too late. Or why we are so familiar with our surroundings that we can no longer see the forest for the trees. The result is that incidents themselves begin to inform us of the things we should be seeing and fixing.

Think about the number of hazard hunts that have been conducted in work areas only to miss the hazard that results in the next incident. Think about the design for safety review that was just completed on a new piece of equipment, but still an incident occurs. Think about the pre-job risk assessment that was completed ahead of the task, but still missed an important hazard that wasn’t identified. It is one thing to know about the hazards to look for, but another to see them.

Created by the Toledo Museum of Art (TMA), the Center of Visual Expertise (COVE) is focused on leveraging the lessons taught in art education to improve our safety processes by improving our ability to see what is in front of us. The visual literacy methodologies and processes developed by TMA and COVE teach individuals how to move past “looking” to “seeing,” leading to a more complete interpretation of the environment we are dealing with. We can then control, if not eliminate, the hazards that are in front of us, rather than waiting for an incident to inform us that they exist.

Companies are now learning how visual literacy can improve their ability to execute critical safety processes, and are integrating visual literacy into their training agendas. As one recent participant in a visual literacy workshop commented, “You will never see things the same anymore.”

Doug Pontsler is the chairman and managing director at COVE: Center of Visual Expertise (www.covectr.com). Before joining COVE, he was vice president of operations sustainability and environmental, health and safety for Owens Corning. In that leadership role, his responsibilities were expanded in 2011 to include foundational compliance and sustainability operations performance. Pontsler serves as a member of the National Safety Council Board of Directors and is chairman of the National Safety Council Campbell Institute.



Teaching Quality Health and Physical Education

by Dean Dudley, Amanda Telford, Claire Stonehouse, Louisa Peralta and Matthew Winslade | This practical new text will help pre- and in-service teachers to develop and implement quality health and physical education experiences in primary schools.

Teaching Quality Health and Physical Education
© 2018 | ISBN 9780170387019 | Edition 1 | 344 Pages
AU / NZ
Published: 2017 by Cengage Learning Australia
Author/s: Dean Dudley / Charles Sturt University, Bathurst
Amanda Telford / RMIT University
Claire Stonehouse / Deakin University
Louisa Peralta / University of Western Sydney
Matthew Winslade / Charles Sturt University

 

Taught well, Health and Physical Education can provide purposeful, stimulating and challenging learning experiences. It can help children to develop sophisticated understanding, skill and capabilities through their bodies and to see greater meaning in not only what they are learning but also their wider lives; and it can enrich all other aspects of the curriculum.
This practical new text will help pre- and in-service teachers to develop and implement quality health and physical education experiences in primary schools. It introduces the general principles of teaching and learning in Health and Physical Education and explains why this learning area is an important part of the Australian Curriculum. Chapters then discuss considerations and practical implications for teaching both health and physical education using a strengths-based approach.
Packed with evidence-based and research-informed content, this valuable text also includes numerous examples and activities that help you bridge the gap from theory to real-world practice. Above all, it will give educators the confidence to teach primary health and physical education so that every child benefits.

 

Contents

Part 1: Introduction to the area
1. Introducing Health and Physical Education
2. Understanding quality Health and Physical Education
3. Overview of the Australian Curriculum: Health and Physical Education
4. Authentic learning and assessment in primary Health and Physical Education.
Part 2: Understanding and teaching about personal, social and community health
5. Pedagogies and issues in teaching for health
6. Exploring identity, help-seeking behaviour and decision making
7. Communicating for healthy relationships and wellbeing
8. Whole-school approaches to promoting health.
Part 3: Understanding and teaching about movement and physical activity
9. Planning for developmentally appropriate learning
10. Moving for purpose: skills, knowledge and values
11. Moving for life: experience and expression.

 

About the author (2017)

Dr Dean Dudley is a former Health and Physical Education Head Teacher and Director of Sport and now works as a physical education academic at Macquarie University. He is Senior Lecturer and Researcher of Health and Physical Education at Macquarie University, as well as Vice President (Oceania) of the International Federation of Physical Education and Chief Examiner (Personal Development, Health, and Physical Education) for the NSW Board of Studies and Teacher Education Standards. Dean was Expert Consultant on the Quality Physical Education Guidelines for Policymakers published by UNESCO in 2015. His research is focused on the assessment and reporting of physical education and the development of observed learning outcomes pertaining to physical literacy.

Amanda Telford is an Associate in the School of Education at RMIT University. In addition to experience as an academic and as a health and physical education teacher, Amanda has experience as a company director of an organisation consisting of a network of over five thousand health and physical educators. She has been an advisor for state and federal governments in the area of Health and Physical Education and was involved in the development of the 2004 National Physical Activity Guidelines for children and young people. Her research focuses on the influence of family, community and school environments on youth physical activity behaviour.

Claire Stonehouse lectures at Deakin University in Health Education, Student Wellbeing and Sexuality Education in both primary and secondary pre-service education. Claire has worked in many sectors of the community, and has experience writing curricula and educating young people. Her areas of interest include: sexuality education; the educational impact that parents have on their children; and opening up conversations about mental health.

Louisa Peralta is Senior Lecturer of Health and Physical Education in the Faculty of Education and Social Work at the University of Sydney. As an academic, Louisa teaches in the areas of primary and secondary Health and Physical Education and professional practice studies. Her teaching, research and publications focus on school-based programs for improving students’ physical activity levels and motivation, improving adolescent health literacy through whole school approaches, and designing and delivering professional learning experiences for preservice and inservice Health and Physical Education teachers.

Matthew Winslade is Associate Head of the School of Teacher Education and Course Director for Health and Physical Education at Charles Sturt University. Prior to moving into the tertiary sector he was both a Head Teacher in the state system and a Director of Sport in the Association of Independent Schools. His current research activities include evaluating school- and university-based health and physical activity programs, and the development of intercultural competency in pre-service teachers. Matt currently divides his time between Australia and Samoa, working closely with community groups and sporting organisations at both school and university level.



Preparing yourself for Careers of the Future

You don’t have to be at the top of your class to prepare yourself for careers of the future. However, you do have to be well-rounded in most disciplines, dedicated to your studies, and open to suggestions from your teachers or professors. Today’s school administrators also need to rework their curricula to include both technical and soft skills that will challenge and enable students to succeed in the future world of automation.

It doesn’t matter what your current career path is; you use skills in arts, science, technology, engineering, or math in one form or another every day. More knowledge in these areas of study will no doubt help you in the careers of the future. And believe me, no one knows what the careers of the future will hold. What we do know is that as a high school or college student, it’s imperative you push yourself to be proficient in arts, math, science, and technology. In the future world of automation, it will be very hard (but not impossible) to get by without some knowledge of arts, math, science, and technology.

Academics

You can start preparing yourself for careers of the future through academic courses. Here are some of the core courses to get you started while you’re still in high school or college.

  • Artificial Intelligence
  • Statistics
  • Computational Biology
  • Molecular Biology as a Computational Science
  • Geography
  • Immunology
  • Physics
  • Chemistry
  • Computer Programming
  • Web Programming
  • Data Programming
  • Computer Science Principles
  • Computer Assisted Art
  • Research Methods
  • Introduction to Algorithms
  • Identities: Race, Class, Gender, and Sexuality in Anthropology
  • Economics
  • Probabilistic Robotics
  • Probability and Mathematical Statistics
  • Mathematical Reasoning
  • Electronics
  • Environmental Science
  • Political science
  • Technical writing
  • Creative writing

Work Experience & Hobbies
Other ways to prepare yourself for careers of the future are through work experience and engaging in various hobbies. These activities include, but are not limited to:

  • Take part in a fundraising event or other project involving budgeting and math skills.
  • Participate in a lobbying or census project to gain experience conducting interviews, analyzing data, and writing a report on the project.
  • Volunteer at a math or science camp or after-school program.
  • Participate in a team programming class to develop software of interest in a team environment.
  • Before you recycle your old laptop or desktop computer, Google how to take it apart and put it back together.
  • Ask people close to you to help you line up a summer internship at a place you would really love to work. The experience is what you’re shooting for, but it is even better if you can talk the administrators into covering your transportation and lunch money for the duration of your internship.
  • Be a contributing member of your school clubs, especially robotics, math or science clubs.
  • Push yourself to the limit on a project for a science fair.

There is no better way to prepare yourself for careers of the future than to be well-rounded. A balance of exercise or sporting activities, combined with a rigorous art project or a coding competition with friends in modern computer languages such as JavaScript, Python, Java, SQL, Ruby, C#, C++, or PHP, is highly recommended.



Are We Born With Knowledge?

by Will Lyon while at the Boston University Undergraduate Program in Neuroscience | One thing I have always struggled with in reading philosophy is the doctrine of Innatism, which holds that the human mind is born with ideas or knowledge. This belief, put forth most notably by Plato as his Theory of Forms and later by Descartes in his Meditations, is currently gaining neuroscientific evidence that could validate the belief that we are born with innate knowledge of our world (Left to right: Plato, Kant, Nietzsche, Buddha, Confucius, Averroes).

The predominant belief and assumption about human learning and memory is that we are born as a “blank slate,” and we gain our knowledge and ideas through new experiences and our memory of them. This belief is known as Empiricism and, although it dates back to Aristotle, has been supported by many famous philosophers such as John Locke and Francis Bacon. However, a study published in last March’s Proceedings of the National Academy of Sciences (PNAS) may, to an extent, discredit this main theory of knowledge collection. The research, conducted by the Blue Brain Group in Switzerland, explored the remarkable similarities in the neuronal circuitry in the neocortices of all brains. The study, summarized in this article in PNAS, essentially “discovered a synaptic organizing principle that groups neurons in a manner that is common across animals and hence, independent of individual experiences.” This discovery may have huge implications on our understanding of learning, memory, and development. The groups of neurons, or cell assemblies, appear consistently in the neocortices of animals and are essentially cellular “building blocks”.

In many animals then, it may hold true that learning, perception, and memory are a result of putting these pieces together rather than forming new cell assemblies. According to Dr. Markram, “This could explain why we all share similar perceptions of physical reality, while our memories reflect our individual experience.” This is a remarkable example of the ways in which neuroscience and its research is revolutionizing our understanding of the ways in which we come to know and perceive our universe, while simultaneously answering major philosophical questions. While these findings may go against the incredibly popular empirical view of knowledge, they lend themselves very well to the notion of innate ideas. Plato and Descartes used this general theory to explain human reasoning. Plato believed that the human soul exists eternally, and exists in a “world of forms (or ideas)” before life; all learning is the process of remembering “shadows” of these forms here on Earth. While this idea is still a little out there for me at least (and it may take a little more scientific evidence to support that claim), Descartes’ claims seem very consistent with the Blue Brain Group’s findings.

Descartes proposed that the inborn ideas that we possess are those of geometric truths and all of our intelligence can be accessed through reason. Discussing ideas in his fifth meditation, he states “We come to know them by the power of our own native intelligence, without any sensory experience. All geometrical truths are of this sort — not just the most obvious ones, but all the others, however abstruse they may appear.” Another study supporting this notion is the result of research on “intuitive physics,” or the seeming understanding we possess of the physical behavior of objects in our universe without even thinking about it. In an article summarizing the study, Janese Silvey provides the example that “if a glass of milk falls off a table, a person will try to catch the cup but not the liquid spilling out. That person is reacting rather than consciously thinking about what to do.” The report on the actual experiment, by Susan Hespos and Kristy vanMarle, showed that infants possess expectations that, for example, objects still exist when they are hidden, and are surprised when these expectations are not met (surprise was indicated in the study by a longer looking time). Other experiments were conducted to demonstrate the understanding that infants from 2-5 months old have of cohesive properties, solidity of materials, and other basic physical characteristics of objects. The full report of the findings can be found here.

For me, the best news that comes out of this is that these new findings compromise both the philosophical doctrines of innatism and empiricism, opening up new discussions of exactly what knowledge and learning mean.

Markram’s Study on Synaptic Organization-PNAS

Physics for Infants-WIREs Cognitive Science

Descartes’ Theories of Innate Ideas-Stanford Encyclopedia of Philosophy

Plato’s Theory of Forms and Thoughts on Innate Ideas-Stanford Encyclopedia of Philosophy

Infants Understand More Than Thought-Columbia Daily Tribune

New Evidence for Innate Ideas-Blue Brain Group



How 5G will Enable the Future

One may expect 5G to be like a cloud of connectivity that follows you everywhere; for example, from your home to your autonomous vehicle, which drives you through your “smart” city to your “smart” and “secure” office.

5G (5th generation mobile networks or 5th generation wireless systems) is the next generation of super-fast and secure mobile telecommunications standards, with speeds beyond the current 4G/IMT-Advanced standards. The 5G mobile telecommunications standard will usher in a unifying connectivity fabric with huge enhancements to the broadband experience everywhere and at any time. 5G will also allow us to seamlessly connect embedded sensors in virtually everything. 5G concepts such as millimeter wave (mmWave) spectrum will provide internet access to homes using wireless network technology rather than fixed lines. Other 5G use cases include ultra-high-fidelity media experiences and ultra-reliable, highly available, low-latency links, such as remote control of critical infrastructure, all of which are slated to arrive in 2020.

The spectrum allocations and the Request for Proposal (RFP) and Request for Quotation (RFQ) processes have already begun in the US, and the US appears to be leading in mmWave deployment. To prepare for a 5G world, 3GPP, the international wireless standards body, completed the 5G technical specifications that allow chip and hardware makers to start development.

Sanjay Jha, speaking at the IEEE 5G Santa Clara World Forum this July, suggested that one may expect 5G to be like a cloud of connectivity that follows you everywhere; for example, from your home to your autonomous vehicle, which drives you through your “smart” city to your “smart” and “secure” office. There will be an explosion of real-time acquisition and manipulation of images in every aspect of our lives, in government, retail, healthcare, education, and entertainment. The explosion in mobile display resolution and real-time AI applications will make augmented reality (AR), virtual reality (VR), and mixed reality (MR, or hybrid reality) seamless, merging real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time.

At the next DC5G in Washington, DC, this November, one would expect to hear about and see live demos of disruptive 5G IoT-powered VR, AR, and MR applications, with booth after booth demonstrating how this super-fast connectivity will change how we connect online and make our lives easier. All the wireless technology providers are positioning themselves for this massive mobile technological revolution. Some of them plan to roll out limited 5G networks in the Sacramento area of California this year with peak speeds of up to 1 Gbps. That is “a big deal,” considering that future 8K video streaming would require no more than 100 Mbps. What will make 5G unique will not just be the super-fast speed but also the ability to securely connect a whole lot of devices without wires, which means most home users may have no need for wired internet service, since wireless providers’ speeds will rival wired connections.
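
As a sanity check on that 100 Mbps figure, here is a rough back-of-envelope calculation; the frame rate, bit depth, and compression ratio are assumptions chosen for illustration, not numbers from the article.

```python
# Back-of-envelope check on the ~100 Mbps figure for 8K streaming. Resolution
# is standard 8K UHD; the frame rate, bit depth, and compression ratio are
# assumptions for illustration, not numbers from the article.
pixels_per_frame = 7680 * 4320          # 8K UHD
bits_per_pixel = 10 * 3                 # 10-bit color, 3 channels (assumption)
frames_per_second = 60                  # assumption

raw_bps = pixels_per_frame * bits_per_pixel * frames_per_second
print(f"Uncompressed: ~{raw_bps / 1e9:.0f} Gbps")          # ~60 Gbps

compression_ratio = 500                 # assumed codec efficiency, order of magnitude
print(f"Compressed (~{compression_ratio}:1): ~{raw_bps / compression_ratio / 1e6:.0f} Mbps")
```

Even with these rough assumptions, a compressed 8K stream lands near the 100 Mbps mark and sits comfortably below the 1 Gbps peak speeds mentioned above.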



Understanding Quantum Computing

One suggested approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates (images: Wikipedia).

Quantum computing, although still in its infancy, uses a very different form of data handling to perform calculations; in short, it is computing based on quantum-mechanical phenomena. The emergence of quantum computing rests on a new kind of data unit that is not restricted to two definite states (0 or 1). Quantum computation uses quantum bits, or qubits, which can exist in superposition and entangled states.

Unlike a classical computer, which works on bits of data that are binary, a quantum computer maintains a sequence of qubits, each of which can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. Physically, qubits are realized in systems such as atoms and molecular structures; however, many find it helpful to think of a qubit as a binary data unit with superposition.
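
As a minimal illustration of that state counting (a plain NumPy sketch, not tied to any particular quantum hardware or SDK), an n-qubit register is described by a vector of 2^n complex amplitudes, and applying a Hadamard gate to each qubit of a two-qubit register produces an equal superposition over all four basis states:

```python
import numpy as np

# Plain NumPy sketch (no quantum SDK): an n-qubit register is a vector of
# 2**n complex amplitudes, so 2 qubits -> 4 amplitudes, 3 qubits -> 8.
def zero_state(n_qubits):
    state = np.zeros(2**n_qubits, dtype=complex)
    state[0] = 1.0                       # the |00...0> basis state
    return state

# Hadamard gate: puts a single qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Apply H to each qubit of a 2-qubit register (full operator is the tensor product).
state = np.kron(H, H) @ zero_state(2)

probabilities = np.abs(state)**2         # Born rule: |amplitude|^2
print(len(state))                        # 4 amplitudes for 2 qubits
print(probabilities)                     # [0.25 0.25 0.25 0.25]
```

The squared magnitudes of the amplitudes (the Born rule discussed below) give the probability of each measurement outcome; here, each of the four outcomes occurs with probability 0.25.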

Underscoring the importance of quantum computing, many national governments and military agencies are funding quantum computing research, in addition to efforts to develop quantum computers for civilian, business, trade, environmental, and national security purposes such as cryptanalysis. John Preskill introduced the term quantum supremacy to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field. Roger Schlafly has pointed out that the claimed theoretical benefits of quantum computing go beyond the proven theory of quantum mechanics and imply non-standard interpretations, such as multiple worlds and negative probabilities. Schlafly also maintains that the Born rule is just “metaphysical fluff” and that quantum mechanics doesn’t rely on probability any more than other branches of science but simply calculates the expected observation values.

One of the greatest challenges of quantum computing is controlling or removing quantum decoherence, that is, the loss of quantum coherence that occurs when the system is not sufficiently isolated from its environment, since interactions with the external world cause the system to decohere. Right now, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence. This means that time-consuming tasks may render some quantum algorithms inoperable, since it is difficult to maintain the state of the qubits for a long enough duration without the superpositions being corrupted.
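
A toy sketch of how that corruption plays out (a standard simplified dephasing model with an assumed coherence time, not a description of any specific machine): the off-diagonal elements of a qubit's density matrix, which encode the superposition, decay exponentially with time.

```python
import numpy as np

# Simplified dephasing model (assumed coherence time; not a specific machine):
# the off-diagonal terms of a qubit's density matrix, which encode the
# superposition, decay as exp(-t / T2) while the populations stay fixed.
T2 = 100e-6                                    # assumed coherence time: 100 microseconds

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = np.outer(plus, plus.conj())             # density matrix of (|0> + |1>)/sqrt(2)

def dephase(rho, t, T2):
    rho_t = rho.copy()
    decay = np.exp(-t / T2)
    rho_t[0, 1] *= decay                       # only the coherences decay ...
    rho_t[1, 0] *= decay
    return rho_t                               # ... populations are unchanged

for t in (10e-6, 100e-6, 1e-3):
    coherence = abs(dephase(rho0, t, T2)[0, 1])
    print(f"t = {t*1e6:7.1f} us -> remaining coherence {coherence:.3f}")
```

Once the off-diagonal terms are gone, the qubit behaves like a classical coin flip, and the interference effects that quantum algorithms depend on are lost.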

One suggested approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, relying on braid theory to form stable logic gates. The topological quantum computer is just one of several quantum computing models in development.

In February 2018, scientists reported, for the first time, the discovery of a new form of light, possibly involving polaritons, that could be useful in the development of quantum computers. In March 2018, Google’s Quantum AI Lab announced a 72-qubit processor called Bristlecone. And in April 2018, IBM Research announced that eight quantum computing startups had joined the IBM Q Network: Zapata Computing, Strangeworks, QxBranch, Quantum Benchmark, QC Ware, Q-CTRL, Cambridge Quantum Computing, and 1QBit.

Quantum computers are really good at solving problems where you’ve got an exponential number of permutations to try out, said Andy Stanford Clark. However, he added, quantum computers will never be able to run the type of logic that we’re familiar with in the classical computer architecture. And although quantum computers may be faster than classical computers for some problem types, a Turing machine can simulate them, so a quantum computer could never solve an undecidable problem such as the halting problem.