Programs for Programming

Everyone remembers having to learn cursive in grade school; it was supposed to be a faster, fancier way to write than print. Keyboarding was never required in 1990s public education, but I was fortunate enough to pick it up as an optional elective in middle school. Even so, I can’t help but notice how much more often I use keyboarding than cursive in my everyday life. Is it just because I am a computer scientist, or is it a sign of the changing times? I don’t claim to know what’s best for today’s grade school curriculum, but I’m sure most people would agree that these priorities should be reversed if they haven’t been already.

What about programming, then? Historically, keyboarding and programming were both skill sets that only the technologically inclined had to know. But today, toddlers swipe away at their smartphones while their grandparents continue to hunt and peck at old keyboards. In the twenty-first century you don’t have to be inclined to participate in technology – technology comes to you! As a result, there is a modern-day program to introduce more people to, well, programming. CS4All, or Computer Science for All, is a new education movement that seeks to “provide equity, empowerment, and opportunities that maximize the innate potential of every student”. Low-income families have rarely had the opportunity to expose their children to computing careers, and CS4All aims to spark that interest in as many students, and as early, as it can.

But where exactly should computer science fit into a typical K-12 curriculum? In my own experience, fundamentals of computing were available as early as grade school, and keyboarding was an elective in middle school. Actual programming did not come until high school (in my case, Java), and even then it may have only been because I specifically attended a technology academy. And while I agree with the overall goals of CS4All, it’s important to recognize that not everyone has the drive or desire to end up in the computer science field. In that respect it is a bit like math: although everyone is required to study it to a certain degree, the extent to which most people actually use it daily is limited to a very basic level. Practically speaking, among everyone who begins studying math in grade school, those who go on to earn a degree in mathematics are few and far between.

Some would argue that having to learn programming in order to use computers would be like having to learn automobile mechanics in order to drive a car. I’m inclined to agree. As any computer scientist can tell you, abstraction is a fundamental aspect of programming: the practice of providing users with a simple interface that does not reveal any of its underlying complexity. In today’s society, people of all professions use computers without a full understanding of what goes on under the hood. CS4All’s aim is to peel back a layer of that abstraction by giving more people the means not just to use technology, but to code and create it through programming. The problem is that making this type of education mandatory would create an unnecessary expense: students spending time studying programming with no intention of ever entering the field in the first place!
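To make the idea of abstraction concrete, here is a minimal Python sketch. The thermostat class and its numbers are entirely hypothetical, chosen only for illustration: the caller sees one simple method, while the unit conversion and safety bounds stay hidden behind it.

```python
# A minimal sketch of abstraction: the caller sees a simple interface,
# not the arithmetic hidden behind it.

class Thermostat:
    """Public interface: set a target temperature in Fahrenheit."""

    def set_target_f(self, fahrenheit):
        # Hidden complexity the user never has to think about:
        # unit conversion and bounds-checking.
        celsius = (fahrenheit - 32) * 5 / 9
        self._target_c = max(5.0, min(35.0, celsius))

    def target_c(self):
        return self._target_c

t = Thermostat()
t.set_target_f(72)          # the "driver's" view: one knob
print(round(t.target_c(), 1))  # → 22.2
```

The point is the same one the car analogy makes: you can set the temperature without ever seeing the conversion formula, just as you can drive without knowing how the engine works.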

That’s not to say I don’t believe in exposing children to programming at an earlier age. In fact, I’m all for it! I just believe that what we should guarantee is the opportunity for students and their families to opt into such a program without unreasonable additional expense, not make it mandatory for everyone. That way, schools only have to pay as much as the community demands, and the option is always open for children to dip a toe into the pool of computer science and see if they like it.

Trollolol

Trolling. It’s a modern term most baby boomers wouldn’t even recognize, but one close to the heart of every generation since. Yes, close to the heart and clogging the aorta. Trolling is a verb that became popular with the rise of the Internet and refers to acting to deliberately offend others and start arguments. While the general term has found some informal usage outside of computers and in the real world, its origins and use have spread much deeper within Internet culture for decades. People who actively participate in trolling are negatively labeled “trolls”, and are often looked down upon by the rest of the Internet community as immature and selfish. Trolls are assumed to find some twisted form of entertainment in harassing and upsetting others, which led to the spread of the popular Internet phrase “don’t feed the troll”. Essentially, this practice boils down to ignoring trolls completely to deny them any satisfaction from their own work. While some attribute the destructive behavior of trolls to some sort of real-life escapism, others doubt the validity of this victim card. It may be that the Internet simply acts as a convenient outlet for a troll’s already sadistic tendencies, not just as a form of escape.

If you think about it, trolling is just harassment through a digital medium. By that logic, companies have just as much (or as little, depending on your personal opinion) responsibility to prevent and respond to online trolling as to workplace harassment. Think about it: physical stalking and harassment is not an ignorable offense and cannot be publicly tolerated by any self-respecting company. Online stalking and harassment, however, is treated very differently. Facebook and other online social services provide means for users to communicate, but they are extremely limited in their power to filter the actual content of messages. Restricting people’s free speech isn’t taken lightly, even on the Internet, and because the term “trolling” was originally coined in a casual setting, people often downplay its causes and effects. Nowadays trolling has even gained the reputation of being ultimately harmless online harassment. That may be fine by itself, but it’s important to make clear that not every form of online harassment can be waved off as trolling. More serious forms (see: online bullying) are very real, with very real consequences.

In my personal opinion, anonymity on the Internet is a good thing. While there will always be groups who abuse anonymity to take advantage of others, I have faith that the majority of the world is composed of good-willed people (if it weren’t, society as we know it would fall apart). Anonymity allows this larger group of good-willed people to express themselves without fear of being judged, and in my opinion the creative repository this gives humankind far outweighs the potential damage done by anonymity’s abusers. While “real name” policies have been experimented with to prevent such abuse, the reality is that there is still no practical way to enforce such strict rules, and even less of a way to ensure abusers can’t find a way to stay anonymous in the first place. Until law catches up with technology (and it may never), enforcing “real name” policies seems all but impossible at the moment.

Personally, I agree that the best way to deal with trolls is to ignore them. Any interaction with trolls, whether to correct them or reason with them, will most likely end in a one-sided argument and frustrated feelings. The best advice I can give is to develop a thick skin through experience online, and to remember that trolls only have as much power as those around them are willing to give.

Deep Dive

Artificial intelligence is intelligent, human-like behavior exhibited by machines and software. While it used to be the subject of many popular twentieth-century science fiction movies, it now seems only a few decades from becoming reality. True to the word ‘intelligence’, AI was developed to mimic not only the human brain’s behavior but also its natural structure. Artificial neural networks (ANNs) were inspired by their biological namesake, and are one of the many similarities AI and human intelligence share. ANNs play a vital role in machine learning and cognitive science through deep learning, which gives machines the “ability to learn without being explicitly programmed”. But while this may seem akin to human intelligence, there is a specific difference that makes it distinct: decision-making by AI today is bound to a single directive. Humans function much differently, constantly comparing and re-prioritizing goals in their everyday lives – not to mention that we often create goals of our own. Clearly, much work remains to bridge the gap between AI and human intelligence.
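That phrase “learning without being explicitly programmed” can be shown with a toy sketch – a single artificial neuron, nowhere near a real deep network, with made-up numbers chosen only for illustration. The neuron is never told the rule for logical AND; it only sees examples and nudges its weights after each mistake.

```python
# A toy artificial neuron (a perceptron) learning logical AND from
# examples alone -- the rule itself is never written into the program.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights, adjusted during learning
b = 0.0          # bias term
lr = 0.1         # learning rate (an arbitrary small step size)

def predict(x):
    # Fire (output 1) only if the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):            # a few passes over the examples
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]   # nudge each weight toward
        w[1] += lr * error * x[1]   # the correct answer
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

After a handful of passes the neuron classifies every input correctly, yet nothing in the code spells out what AND means – which is the single-directive learning described above, scaled down to one neuron.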

That’s not to say current advancements in AI are not stunning. AlphaGo, Deep Blue, and Watson are all well-known examples of artificial intelligence technology. Although each was originally designed for a specific, singular purpose (playing Go, chess, and the game show Jeopardy!, respectively), they demonstrated great mastery by handily defeating human professionals in their respective fields. While this may seem like passing entertainment to some, I’m sure fellow engineers have no problem imagining their use in other, more generalized commercial applications. In fact, derivatives of Watson are already ‘employed’ by the Memorial Sloan Kettering Cancer Center in New York as decision aids for lung cancer patients. In addition, AlphaGo’s triumph over 9-dan Go professional Lee Sedol just last March convinced the South Korean government to invest $863 million in AI research over the next five years! That should silence any naysayers of AI’s viability.

But when can we definitively say that AI has reached the complexity of human intelligence? Alan Turing proposed a test in 1950 to measure a machine’s capacity to ‘behave intelligently like a human being’. The standard version hinges on a human evaluator’s ability to distinguish between another human and an AI based solely on their responses: “How closely can the AI pass for the human?” is the final question posed by this Turing test. Today, this test is at odds with John Searle’s Chinese Room argument, which states that regardless of how intelligently a machine behaves, its program cannot give it a human “mind”, “understanding”, or “consciousness”. To illustrate his point, he describes a locked room with a person inside who speaks no Chinese. If that room is filled with translation books, the occupant would be able to carry on a fluent written conversation in Chinese with any Chinese speaker outside. In essence, outsiders could be fooled into thinking the room’s resident knew Chinese when in fact they were simply following written translation instructions. In actuality, there was never any understanding of the Chinese characters at all!
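The Chinese Room can be caricatured in a few lines of code – a deliberately crude sketch, with a tiny made-up rule book (romanized here for readability) standing in for Searle’s translation books. The “room” replies fluently by pure lookup, with zero understanding of what any phrase means.

```python
# A crude caricature of Searle's Chinese Room: the responder follows
# written instructions by rote. Fluent output, zero understanding.

RULE_BOOK = {
    "ni hao": "ni hao!",        # hypothetical phrase-book entries,
    "ni hao ma": "wo hen hao",  # romanized for readability
}

def room(message):
    # Mechanically look up the reply; no meaning is ever involved.
    # Unknown input gets a stock "please say that again" response.
    return RULE_BOOK.get(message, "qing zai shuo yi bian")

print(room("ni hao ma"))  # → wo hen hao
```

To an outsider exchanging notes with this function, the conversation might look competent – which is exactly Searle’s point: convincing behavior alone doesn’t demonstrate a mind behind it.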

I am of the belief that, with the machines we use today, artificial intelligence cannot hope to achieve a perfect human “mind”. While the end result of machine behavior may mimic ours, I sincerely doubt that the reasoning behind it would be anything but the end result of massive computation. It’s difficult to imagine machines today acting out of more complex human emotions like regret, appreciation, compassion, or love. In that sense, I suppose I agree with Searle’s Chinese Room argument as far as today’s computers go. But that’s not to say a perfect AI can never exist. I just believe our interpretation of modern computing would have to change drastically to make it happen.