An Impartial Internet

Net neutrality is a much debated topic when discussing the future of the Internet. But what exactly is net neutrality? The controversy predates its name, but the term itself was coined by Columbia Law School professor Tim Wu and refers to the principle of an unbiased Internet. In simple terms, the principle states that Internet service providers (ISPs) such as Comcast, AT&T, Verizon, and Time Warner Cable should treat all data on the Internet equally; they cannot discriminate or charge differently based on user, content, site, platform, or application. In essence, ISPs are expected to “provide the pipes, but should have no say in what passes through them.” This is currently how the Internet works, but several ISPs are pushing for a “closed Internet” in which they could regulate traffic and potentially filter content. Neutrality advocates argue that this gives providers too much power outside their jurisdiction, while ISPs counter that data discrimination actually guarantees quality of service.

There are many freedoms to an open Internet that most people enjoy without realizing it: the lack of restrictions means anyone can access (almost) any part of the Internet with the guarantee that no one receives special treatment over anyone else. Anyone e-mailing, file sharing, instant messaging, or video conferencing from the comfort of their own home receives the same treatment as a large organization paying far more for the same service. ISPs are attempting to change this status quo by lobbying for “Internet fast lanes,” in which they could offer content providers faster service that distinguishes them from individuals. Neutrality supporters counter that leaving transfer rates to the discretion of the ISPs would create an environment that heavily favors corporate alliances and undermines free-market capitalism. Who is right, and who is wrong? It is easy to slap a corporate agenda onto the motivations of the ISPs, but it may be more productive to consider the nature of the Internet itself in order to draw concrete conclusions.

The modern Internet behaves much like road traffic, so much so that the word “traffic” today is just as likely to refer to the web as to the street. The analogy is well deserved. Many people use roads for many different purposes, but everyone experiences the same service without prejudice, whether they are a taxi driver conducting business or a family on a relaxing cross-country trip (with the exception of some emergency services). Continuing this metaphor, the “fast lane” proposal is the functional equivalent of the government granting paving companies the power to designate express lanes and to collect fees for their usage. After all, paving companies lay the foundations of the road system just as service providers lay the foundations of Internet access. But something already seems out of place: we pay taxes to the government for the construction of roads, not for their usage. And that’s not all. A lack of road neutrality, a “closed road,” would also grant paving companies the power to deny access to places they do not want people to visit. This not only infringes on the rights of drivers, but also makes it difficult for businesses to communicate directly with consumers by inserting a middleman that, arguably, doesn’t need to exist. In addition, designating fast lanes for businesses means fewer resources for those traveling in the slower lanes, whose users are far more numerous. Although ISPs claim that discriminating between data ensures quality of service, the larger issue is that providing Internet access does not seem to be a service that warrants continued management past its provision; this becomes apparent when the service is translated into a real-life system like road traffic. The freedom to drive on roads seems to be a right expected just as much as an impartial Internet.

Project 03: Whistleblowing, Security, Privacy

People have a right to manual encryption in the same way that they have a right to wear a Kevlar vest out the door each morning; they may do it as many times as they want, though many would come to question the tradeoff they are making between security and time.

The right to privacy is never explicitly stated in the Constitution, but it is alluded to in the Fourth Amendment through “the right to be secure in [our own] persons, houses, papers and effects.” In the twenty-first century, encryption is the most secure way for us to achieve digital privacy, and that makes it a fundamental right. Claiming that our right to encryption does more harm than good by locking out the government is similar to claiming that our right against unreasonable search and seizure does more harm than good by locking out law enforcement. The only notable difference between the two is that one is physical while the other is digital. Physical warrants are certainly acceptable in many cases, but a search becomes socially acceptable only once the need to infringe upon the right has been proven beyond a doubt. To make a proper transition from physical to digital, circumventing encryption by ordering a workaround would have to clear a similar bar through a kind of “digital” warrant. Even then, the process of developing a workaround would have to be kept secret within a contained environment, to prevent any knowledge or code from leaking to the outside world.

Many people do not consciously think about the workings of encryption in their day-to-day lives, though they almost certainly use it on a regular basis. This is much like how nearly everyone drives, yet only a small percentage of drivers are actually aware of the intricacies of how a car works. Is it indifference? Efficiency? Ignorance? Regardless of the reason, abstraction of complexity is an important part of life in the twenty-first century. As a computer scientist, encryption is very important to me because I am aware of how it works and what it does for us. Is it important enough for me to take a political stance on the issue? Certainly. Would I force my agenda onto the less technologically inclined? Probably not. I would openly share my knowledge and opinions with them, but I am of the mind that everyone should form their own educated opinion before committing themselves wholly to an issue. I would feel much better about the future of security if those with more to say on the issue had the opportunity to speak with a louder voice.
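To make the abstraction concrete, here is a toy sketch of the symmetric idea underneath most everyday encryption: a single secret key both scrambles and unscrambles the data. This one-time-pad-style XOR cipher is purely illustrative, and every name in it is my own; real systems use vetted algorithms like AES rather than anything hand-rolled.

```python
# Toy illustration of symmetric encryption: XOR with a random key.
# NOT a real cipher -- it only shows that the same secret key
# that locks the data also unlocks it.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the matching byte of `key`."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # random key as long as the message

ciphertext = xor_bytes(message, key)   # encrypt
recovered = xor_bytes(ciphertext, key) # decrypt with the same key

assert recovered == message
```

The point of the sketch is the abstraction: a user only ever sees "locked" and "unlocked," while the key handling happens out of sight, just as a driver never sees the engine turn over.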

I don’t really see the issue as personal privacy versus national security. No one would argue that either one of those is a bad thing. Instead, I view it as an opportunity to set guidelines as to how the two of them will interact in a way they’ve never interacted before. In that sense, I’m placing my effort and faith in a future where we can guarantee both privacy and security at once.

21st Century Pirates

The Digital Millennium Copyright Act (DMCA) is a controversial piece of U.S. legislation passed in 1998. It was intended to update copyright law to accommodate the Internet, and criminalizes any technologies, devices, or services meant to circumvent copyright measures (with the exception of security-related tasks and encryption research).

DMCA Title II, the Online Copyright Infringement Liability Limitation Act (OCILLA), specifically defines a safe harbor for Internet Service Providers (ISPs): as long as ISPs meet several provisions outlined in the DMCA, they will not be held accountable for the actions of their clients. In other words, an ISP will not be held responsible if one of its subscribers commits copyright infringement (though it must adopt a policy of terminating the accounts of repeat infringers). In exchange, copyright holders have the right to subpoena an ISP for the identity of an alleged infringer.

In the twenty-first century, the misuse of copyrighted material is an almost daily occurrence. Take the most common case: the illegal downloading of music has proliferated so much that it is a wonder the record industry manages to stay afloat (probably through advertisements and live performances). From a physical standpoint, it may not seem like much of a problem for users to download or share copyrighted material. After all, twenty years ago no one complained if someone lent a friend a DVD for the weekend, right? This twentieth-century mindset may be the reason many people engage in the explicitly illegal behavior; the consequences of copyright violation are just not apparent from the click of a single mouse button. “I’ve already paid for the product anyway, right?” This line of thinking makes digital copyright laws seem needlessly strict. But is the physical analogy really accurate? One important distinction is that breaking digital copyright laws usually means keeping the copyrighted product permanently. In that sense, it is less like lending a friend a DVD for the weekend and more like burning a copy of the DVD and giving it to them to keep!

Things get grayer when considering subtler factors. If a product exists in several different formats, should it be considered several different products? Or should users who own one version of a product also have the right to access it in other forms? What if users are only “sampling” the material? The answers to these questions don’t transition perfectly from physical to digital, and it may be dangerous to address them all at once with a blanket law that attempts to cover every possible case. Ideally, I believe the law should recognize digital property as its own form of media instead of trying to modify the laws for physical copyright. Following that, lawmakers should create an organized system for interpreting each creator’s copyright terms. Of course, considering every case individually would cost a great deal of time and resources, so that may not be the most practical approach. General guidelines should govern the majority of the procedure, but there should still be room to interpret a single case outside the guidelines if the need arises.

Even today, new technologies are adding new angles to the piracy problem. Streaming services like Netflix and Spotify, which did not exist ten years ago, have addressed some of the incentives behind piracy. Cloud computing has given users a sense of digital property and a place of permanence on the Internet, but eliminating piracy completely is less a technology issue than a social one.

Patent Pending

Patents are granted by the United States Patent and Trademark Office and confer temporary, exclusive property rights over an invention. There are three types of patents one can file for: plant, utility, and design. Plant patents, as the name suggests, protect new or hybrid vegetation and last 20 years. Utility patents protect new and useful inventions and processes, while design patents protect the ornamental appearance of an article; they last 20 and 14 years, respectively, and their infringement can be enforced through civil lawsuits. In exchange for this protection, the government asks the inventor for a definition of the invention and full disclosure of its workings. Once the patent expires, that information enters the public domain for anyone to legally copy, reuse, or market. As of the year 2000, the cost of obtaining a patent in the United States ranged anywhere from $10,000 to $30,000.

At a conceptual level, patents are meant to simultaneously protect ownership of an idea and spread knowledge of its existence. As outlined in the original U.S. Constitution: to promote “the progress of science and useful arts,” Congress has the power to secure for inventors the “exclusive right to their respective writings and discoveries.” Ideally, this fuels innovation by way of assurance: why bother inventing something new if anyone could just take your idea and sell it as their own? Protection seems like the only sensible answer. The 20-year time limit also performs double duty, stopping monopolies from forming while preventing potentially good ideas from dying out. What, then, could be the problem?

Funnily enough, in the real world ideas don’t work out as soundly as they do on paper. Laws must make room for the times, and patent laws are no exception. Not every inventor wants to deal with the red tape of the business world, and many patent owners find themselves selling the rights to their creations to others. In theory, this should work out fine: even those without the skills or patience for business can at least hand off their idea to someone else and make a pretty penny at the same time. The reality is different. Instead, this created an environment in which new ideas could only thrive in the hands of big business – big business ready to sue anyone who even thinks twice about infringing on its hundreds of bought-up patents.

As with many things in life, I refuse to see the problems with patents as an all-or-nothing scenario. No one can argue against the good intention behind patents as stated in the Constitution, but no one can argue that the current system is perfect either. The intentional misuse of patents by “patent trolls” is proof that there is still room for improvement. I think it’s time for a change. Three types of patents may have been all-encompassing in the twentieth century, but it seems nigh impossible to classify the new ideas of 2016 into just those three categories. Intellectual property today is not only physical, and software and the like should have its own type of patent with rules catered specifically to it. Eventually, the new patent system should have the long-term goals of 1) accommodating new types of patents, 2) discouraging patent misuse (perhaps by way of increased penalties), and 3) allowing orderly revision of the system itself.

Cloud Coverage

To many non-computer-savvy people, the cloud is a complete mystery. What exactly is the cloud? What is it used for? Many people know the cloud is important, but not exactly what it is or what it does. In technical terms, a cloud is a virtual resource pool behind an interface that (if well designed) hides the underlying complexity. In other words, it’s an Internet-based network that provides shared processing and storage resources to its users. Utilizing a cloud means storing, managing, and processing data on it without caring exactly where the data is physically located among the cloud’s many machines.

For developers, cloud computing provides a convenient way to store data without worrying about in-house costs, organization, or dedicating employees to upkeep and management. It is in their best interest to let outside services such as Amazon EC2 (Elastic Compute Cloud) and Google App Engine handle the specific details so that company resources can focus on other production factors. Programmers only have to worry about interacting with the cloud system through its API in order to make full use of the data they send, store, process, and retrieve from it.
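That "interact only through the API" idea can be sketched in a few lines. The class and method names below are my own invention, not any real provider's API; the point is only that a caller puts and gets objects by key while the store decides, invisibly, which machine holds the bytes.

```python
# Hypothetical sketch of the abstraction a cloud storage API offers.
# Each "node" is a plain dict standing in for a remote machine.
import hashlib

class ToyCloudStore:
    def __init__(self, n_nodes: int = 4):
        self.nodes = [dict() for _ in range(n_nodes)]

    def _node_for(self, key: str) -> dict:
        # Hash the key to pick a node -- the caller never learns which one.
        idx = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(self.nodes)
        return self.nodes[idx]

    def put(self, key: str, value: bytes) -> None:
        self._node_for(key)[key] = value

    def get(self, key: str) -> bytes:
        return self._node_for(key)[key]

store = ToyCloudStore()
store.put("photos/cat.jpg", b"...image bytes...")
assert store.get("photos/cat.jpg") == b"...image bytes..."
```

Swap the dicts for real servers and the hash for a placement service, and the caller's two-method view of the world stays exactly the same; that stability is what lets companies outsource the details.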

For consumers, the cloud means long-term durability in the storage of their data. Many cloud systems keep replicas of files in case of individual machine or large-scale power failures, so users can rest assured that (at the very least) the persistence of their stored files is reliably guaranteed. For this reason, cloud computing underpins massive data-generating services like Facebook, YouTube, and Twitter, where every consumer is also a producer of content. And thanks to the cloud, this data can be accessed quickly, on demand, from multiple devices.
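The durability claim rests on a simple mechanism: write each object to several machines, and a read succeeds as long as any one replica's machine is alive. Here is a minimal sketch under my own illustrative assumptions (5 nodes, 3 replicas, failures modeled as a set of down node indices); real systems add repair, consistency protocols, and geographic spread on top of this core.

```python
# Toy replicated store: each object is written to `replicas` nodes,
# so it survives as long as at least one of those nodes is up.
class ReplicatedStore:
    def __init__(self, n_nodes: int = 5, replicas: int = 3):
        self.nodes = [dict() for _ in range(n_nodes)]
        self.replicas = replicas
        self.down = set()  # indices of "failed" nodes

    def _home(self, key: str) -> int:
        return hash(key) % len(self.nodes)

    def put(self, key: str, value: bytes) -> None:
        # Place copies on `replicas` consecutive nodes starting at the home node.
        start = self._home(key)
        for i in range(self.replicas):
            self.nodes[(start + i) % len(self.nodes)][key] = value

    def get(self, key: str) -> bytes:
        # Read from the first live node that holds a copy.
        start = self._home(key)
        for i in range(self.replicas):
            idx = (start + i) % len(self.nodes)
            if idx not in self.down and key in self.nodes[idx]:
                return self.nodes[idx][key]
        raise KeyError(key)

store = ReplicatedStore()
store.put("status.txt", b"ok")
store.down.add(store._home("status.txt"))  # primary replica's node fails
assert store.get("status.txt") == b"ok"    # still readable from another copy
```

With 3 replicas, all copies must fail simultaneously before data is lost, which is why "the cloud lost my file" is so much rarer than "my laptop's disk died."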

But technology has always evolved one step ahead of security and regulation. Just look at the Internet: even though it has been in use for nearly 30 years, the waters of Internet control are still being tested by controversial measures like SOPA (the Stop Online Piracy Act) and the net neutrality debate. Cloud computing is no different. Security and privacy risks are the price consumers pay for using cloud services. Content stored on the cloud is vulnerable to attacks and leaks, as demonstrated by the high-profile iCloud leak of 2014, in which almost 500 private pictures of various celebrities were exposed to the public. This tradeoff between convenience and privacy is one that most users of social media never consciously make.

I am currently taking CSE 40822/60822: Cloud Computing under Dr. Thain. Through coursework I have gained experience coding for Condor, a distributed computing system at Notre Dame, as well as for the Work Queue master/worker framework. Right now I am working with Hadoop and Map-Reduce algorithms while learning Pig Latin and HBase. The processing power granted by a cloud system speeds up performance tremendously, on a scale that can only be appreciated with large jobs. For example, if rendering a one-minute video at 10 frames per second with POV-Ray takes 7 hours and 20 minutes on a single machine, farming the frames out to the Condor cloud could potentially cut the total time down to only 32 minutes, roughly a 14x speedup.
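The Map-Reduce pattern mentioned above can be shown on a single machine in miniature. This word-count sketch is my own illustration, not course or Hadoop code: map emits (word, 1) pairs, a shuffle groups pairs by key, and reduce sums each group. Hadoop's contribution is running those same three phases in parallel across many nodes and surviving their failures.

```python
# Miniature single-machine word count in the Map-Reduce style.
from collections import defaultdict

def map_phase(line: str):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each group of values into a single count.
    return {key: sum(values) for key, values in groups.items()}

lines = ["the cloud stores data", "the cloud scales"]
pairs = [p for line in lines for p in map_phase(line)]
counts = reduce_phase(shuffle(pairs))

assert counts["the"] == 2 and counts["cloud"] == 2
```

Because each map call touches only its own line and each reduce call only its own key, the phases parallelize naturally, which is exactly the property that makes jobs like the POV-Ray render above divisible across a cluster.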

I have mentioned several times on this blog that I hope to one day begin my own startup company. Building an infrastructure that is capable of scaling pretty much requires knowledge of cloud computing concepts and their implementation, so I can see myself making much practical use of this knowledge in the future. Maybe cloudy skies in the forecast aren’t such a bad thing after all.