I believe that Notre Dame matches up with the ACM guidelines relatively well. In my career at Notre Dame so far, I have taken classes in about three quarters of the above knowledge areas, and I know the other quarter are offered as electives. It would certainly be possible to graduate from Notre Dame without hitting every knowledge area in this chart, however, and I will probably do exactly that with Parallel and Distributed Computing, Software Development Fundamentals, and Information Management, just because of how my scheduling worked out with going abroad and not wanting to overload on classes any semester. With all of the College of Engineering math and science courses we have to take and the university requirements, it just wouldn’t be possible. I personally am fine with this because I think I would’ve gone insane taking 15 credit hours of only technical classes every semester, but still, I will graduate with less knowledge in some areas than someone whose curriculum required those courses. For the ABET guidelines, I think we match up much better. They say a computer science education must include “Coverage of the fundamentals of algorithms, data structures, software design, concepts of programming languages and computer organization and architecture. [CS]” These are all required at Notre Dame (assuming programming languages is similar to paradigms), with the exception of software design, which is still an elective. We complete advanced coursework, learn a variety of languages, and have to take math and science classes. Looking at those guidelines, I really like the ABET approach. I think taking advanced math and science courses is good for the first two years of college because it’s practice in thinking logically, and once you can do that, classes like statistics are relatively easy. A problem I see with the ACM guidelines is that for full coverage to be possible, you would be taking only strict computer science classes. 
I would rather talk about books or history than computer science, and I’m glad I got the opportunity to take classes about those things during my time here without having to worry about graduating on time. That being said, I do think the First Year of Studies is a little silly. We should be able to choose whether we want a purely technical education or a more well-rounded one, because we’re the ones paying for the degree. I do not think it would have been better if I had gone to a bootcamp out of high school, or gotten a degree in another major and then gone to a bootcamp. Every single job I applied to asked what college I went to, and most asked my GPA. If the point of college is to get a job, then I 100% believe my education was worthwhile because my impending degree from Notre Dame was a big part of me getting a job. If the point of college is to deepen your knowledge, then I also believe Notre Dame was 100% worth it, because I took classes in a huge range of subjects and know much more than I did when I arrived freshman year. Obama once said, “It turns out it doesn’t matter where you learned code, it just matters how good you are at writing code.” This is only true if you’re Bill Gates or you have a much lower desired salary; the average person would be better off going to college. That being said, I think bootcamps are a good idea, especially for people who may not be in a position in life to spend four years in college. If someone can learn the skills necessary to do a job, then they obviously will be a good candidate for that job. Once again, though, I don’t believe the education they get is equivalent to a college degree. I do not believe you need college to be a good computer scientist, computer engineer, software developer, or programmer. I think if you were self-motivated enough, you could take Stanford’s free online classes and get an education equivalent to the one I received. 
That being said, a degree is worth something, and correct or not, employers see it as an extra hoop you had to jump through that shows you’d be good at the job. I think Notre Dame has done a great job of giving me a baseline education, but I do not feel 100% prepared for my future career. I have never taken a software engineering class. During my internship last summer, there were a lot of things I had to learn about writing enterprise software that I had never learned in school, and I’m sure that will also be the case when I start my full-time job. This is not at all unique to computer science, though. My parents are lawyers, and in law school they learned about the law, not the ins and outs of being a lawyer. They learned that on the job, just like I’ll learn the ins and outs of being a software engineer on the job.
Patents are assurances by the government to an inventor that, in exchange for public disclosure of the invention, the inventor has a complete monopoly on the technology for twenty years. During this period, no one else can make, sell, or import it. It’s easy to see why patents were created in the first place. Intuitively, people value intellectual property just like they value physical property. If Apple spends a ton of time and money researching facial recognition technology and people decide to purchase its product as a direct result of that technology, Google should not be able to use the exact same technology in its product without making the research investment Apple did. It wouldn’t be fair, and it would make people less likely to develop new technologies because they would be stolen anyway and there would be no market advantage. With the use of patents, Apple now owns the creation it invested in, and if someone wants to use the same technology, they need to do their own research on how to recreate it in a fundamentally different way. In a perfect world, these laws are a great way to protect innovation. I believe the use of patents in this manner, their intended manner, is both ethical and moral because it rewards creativity and punishes theft.
In practice, the use of patents is much more complicated. If everyone used patents as they were intended, this wouldn’t even be a conversation, but that is not the case. A good example of this can be found in a Tesla decision from 2014. Elon Musk wrote, “Tesla Motors was created to accelerate the advent of sustainable transport. If we clear a path to the creation of compelling electric vehicles, but then lay intellectual property landmines behind us to inhibit others, we are acting in a manner contrary to that goal. Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” He went on to say, “When I started out with my first company, Zip2, I thought patents were a good thing and worked hard to obtain them. And maybe they were good long ago, but too often these days they serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors.” This demonstrates the basic problem with patents in use: they don’t protect innovation, they just give bigger companies a method to crush smaller companies that can’t keep up with court fees. Musk even said Tesla still needs to get patents, even though the company doesn’t care if people use its technology, because if it doesn’t, someone will file their own patent on Tesla’s technology and sue them. This is clearly hindering innovation. Patents also generally go against the free market. As Musk said, the logical solution to making electric vehicles more mainstream is to make the market more competitive, not less. Startups will not be able to do the legwork Tesla did for electric cars, but they could use that work as a jumping-off point to make the product even better than Musk could imagine. More buying options are always better for the consumer. In that sense, I believe patents hinder innovation more than they help it. 
If we are going to grant patents on physical artifacts, then I think we also have to give software the same protection. The article “The History of Software Patents: From Benson, Flook, and Diehr to Bilski and Mayo v. Prometheus” outlines the major court cases involving patentable software: software went from not being considered patentable in the 60’s and 70’s, to all software being patentable by the 90’s, and finally to the more nuanced, situational allowance we enjoy today. Among these situational rules is the test from the Bilski decision: “This test holds that a process is patentable if (1) it is tied to a particular machine or apparatus, or (2) it transforms a particular article into a different state or thing.” This seems like a very fair rule for determining whether a particular piece of software qualifies. It recognizes that software is created as a result of research and innovation just like physical artifacts, but also makes sure that the process is appropriate to be patented. The existence of patent trolls is absolutely evidence that the patent system is broken. This is lawyers gaming the system and hampering innovation. I would not be a good lawyer because I believe people and corporations should act decently. Just because something is technically legal does not mean it is not harmful.

The article “The Menace and the Promise of Autonomous Vehicles” by Jacob Silverman outlines arguments for and against self-driving cars. On one hand, they would be a huge boost to the economy. Instead of paying a fleet of truck drivers $50,000 a year to transport goods, companies could have self-driving vehicles do the same task for only the initial investment of their creation. This would drive down the price of goods that are transported by trucks, which would allow more people to buy these goods and thus inject more money into the economy. 
Even more significantly, Silverman mentions that “AVs promise to eliminate some 35,000 deaths each year, which are blamed on driver error.” This would be a huge step towards making the roads safer for everyone. At the same time, the consequence of bugs in this technology could be death. The CEO of Toyota North America said, “The reality is there will be mistakes along the way. A hundred or five hundred or a thousand people could lose their lives in accidents.” While this may seem like a statistically insignificant number of people, especially given the 35,000 a year that these vehicles will allegedly save, those hundred or five hundred or a thousand people are dead as a direct result of the pursuit of this technology. Another argument against self-driving cars is that they would actually harm the economy because they would put millions of Americans out of work. Truck drivers, taxi drivers, and Uber drivers would all be out of work, and depending on their age and education level, they may not have the skills to enter a new industry. The “social dilemma of autonomous vehicles” should not be addressed by the programmers or even the corporations creating the cars. There should be government policies put in place by our elected officials that mirror the majority’s opinion on self-driving cars. I don’t want Joe from Uber making a decision about the scenarios in which a self-driving car is allowed to kill me. If programmers do make those decisions, I think they should be held personally liable for any death that results. In my opinion, autonomous vehicles will eventually be good enough that this dilemma will be more theoretical than practical, and until then, they should not be on the road. For example, in the case of the homeless woman who was jaywalking at night and was hit by a self-driving car, it’s feasible to imagine a car that slows down if it senses an object a certain distance in front of it. 
Also, if the car is going to be out at night, this sensor should be just as good as the one that works during the day. If an accident happens and the car followed the agreed-upon law on how to respond, then I think the party who broke the law (presumably the non-autonomous driver) should be held responsible. For instance, if the car braked quickly to avoid hitting a pedestrian and was rear-ended by a tailgating human driver, then the human driver would be responsible. Another example: if the car hit a pedestrian who was jaywalking, the pedestrian would be the one at fault. Economically, self-driving cars will make goods cheaper but dramatically increase the unemployment rate. I think this will be net bad for the economy because more people will be living below the poverty line and thus not spending money, and people above the poverty line will be taxed more to aid those below, so they will also be spending less. Socially, self-driving cars make us confront hard questions about saving lives, but they will also undoubtedly save lives on net, because from what I’ve read, the major accidents have all been started by human error rather than car error. Politically, laws would have to be made to regulate self-driving cars. I could see their allowance becoming a big partisan issue if trucking lobbyists got involved. I would love a self-driving car because then I wouldn’t have to waste part of my day driving. If you commute half an hour to work every day, you could do an hour of work on your commutes and then be in the office an hour less. You’re getting the same amount done but have a whole hour of your day back. More than me having one, I would love it if everyone else on the road had one, because then I wouldn’t have to worry about someone hitting me. 
From the linked Wikipedia page in the readings, artificial intelligence is “intelligence demonstrated by machines.” Intelligence here is an umbrella term that encapsulates human reasoning methods like weighing pros and cons, learning, and problem solving. The only difference between the theoretical ideal of artificial intelligence and actual human intelligence is the agent being a machine versus a human. Artificial intelligence today hasn’t reached this theoretical ideal, but there have been interesting breakthroughs.
IBM’s Deep Blue was one of the first milestones in AI development. In 1996, it defeated the reigning world chess champion in a game of chess. This definitely was not actual artificial intelligence-- it was essentially just a brute-force search algorithm. A computer could also bubble sort a list of ten million elements faster than the best human list sorter in the world, but that does not mean it has human-level intelligence. Much later, IBM tried again with Watson-- a system that could answer questions asked in natural language. In 2011, Watson won first place playing Jeopardy! against two previous champions of the show. I don’t see this as actual intelligence either. It “understood” the questions through a statistical model built on a giant bank of training data. It didn’t learn so much as count and divide occurrences and apply theorems humans discovered. Obviously this occurred on a giant scale, and it was a huge technical feat, but I would not say that in this process Watson truly understood the questions or its answers; it just transformed various inputs into outputs based on the statistical models referenced above. Things have definitely gotten more interesting recently with AlphaGo beating a professional Go player in 2015, and AlphaZero, which is even better than AlphaGo at Go but has also mastered chess and shogi. Of the options discussed so far, AlphaZero is definitely the most tempting to call “actual AI.” It was trained “solely via self-play,” just like a human would be, and then neural networks were used without lookup tables. It trained for 9 hours and afterwards beat all of the leading Go programs (all better than any human), which is crazy when you consider how many years it takes for a person to become good at something. I have not read the full research paper about this, but based on what I read, this is incredibly promising. 
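To make the bubble sort comparison concrete, here is a minimal sketch (the function name and sample list are my own, for illustration) of the kind of purely mechanical procedure a computer executes blindingly fast without any "understanding":

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order elements until sorted.
    Mechanical and exhaustive, like brute-force game search -- no insight required."""
    items = list(items)  # work on a copy
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):  # last i elements are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is sorted: stop early
            break
    return items

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```

The machine "wins" at this task by sheer speed, not comprehension, which is the same reason Deep Blue's victory doesn't imply human-level intelligence.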
I wouldn’t call it a full AI, which I don’t think it’s claiming to be, in the sense of truly understanding, but it’s definitely learning and problem solving and not relying on giant lookup tables to do it. The Turing test, a test that sees if machines can pass for human in conversation, is not a valid measure of intelligence in my mind because, as the Chinese room thought experiment points out, it doesn’t matter how complicated the syntactic relationships it keeps track of to map input strings to output strings are: it still doesn’t truly understand what it’s outputting, which a human does. I understand the concerns over the power of artificial intelligence. In the Newsweek article “How Artificial Intelligence and Robots Will Radically Transform the Economy,” Kevin Maney brings up the massive unemployment that could result. He explains that “Truck driver is the most common job in the world--3.5 million of them in the U.S. alone.” When we inevitably have driverless cars in the near future, that is 3.5 million people who will be unemployed, and unfortunately most of them likely do not have skilled fallback careers. This could devastate our economy, and I’m unconvinced by the argument that “maintaining the robots will require jobs,” because there will be far fewer of those jobs and that work will be skilled. This isn’t really artificial intelligence so much as separate technical innovation, though. Most of the examples of replacing low-skilled workers with machines also fall into this category. I am more afraid of human beings losing their jobs than I am of sentient robots overthrowing us. I don’t think a computing system could ever have a mind on the scale that a human does. It clearly can have a logical mind that vastly surpasses that of humans, but I’ve never heard of a computer that has emotions or can feel pain or love and other distinctly sentient feelings. 
Because of these issues, I think the morality of an AI is just the morality of its training data, and it can’t be held responsible for that because it didn’t choose what it was trained on. I would say that computers are mechanical humans before I’d say that humans are biological computers. Humans train computers to help with human tasks, but there are aspects of humans that computers cannot comprehend, while there are no aspects of computers at the present moment that at least one human doesn’t understand. Ethically, at present, I believe the creator is responsible for its creation rather than the creation itself.

Fake news is basically propaganda masquerading as journalism. In the article “The top 20 fake news stories outperformed real news at the end of the 2016 campaign,” the author Timothy Lee gave some of the absurd (and false) headlines that beat out factual stories: “Pope Francis shocks the world, endorses Donald Trump for president” and “FBI agent suspected in Hillary email leaks found dead in apartment in murder suicide.” Neither of these is at all true, but they were two of the most interacted-with articles on Facebook last year.
My Facebook feed is generally fine, but I still get some fake news when there’s a big political controversy going on (often these days). The last time I remember seeing fake news was during the Kavanaugh Senate hearing. A couple of people I went to high school with shared the same post with a picture of Hillary Clinton and an obscured woman behind her that says, “My my, looks what we have here: Guess who’s walking behind Hillary on this picture? Kristine Ford’s lawyer. Another Proof that this is nothing but a Clinton Hit Job” -- 2,000 likes, 10,000 shares. Except the photo was taken over two years before Dr. Ford came forward, and the woman in the background isn’t even her lawyer. 10,000 people saw that, believed the poorly spelled caption, and shared it without even googling Dr. Ford’s lawyer, who looks nothing like the woman in the picture. I am 100% positive that “Fake News” played a huge role in the 2016 election. The most damning evidence is outlined in the New York Times article “13 Russians Indicted as Mueller Reveals Effort to Aid Trump Campaign.” According to the article, “Russians stole the identities of American citizens, posed as political activists and used the flash points of immigration, religion and race to manipulate a campaign in which those issues were already particularly divisive.” Worst of all, they did it successfully. Many, myself included, believe this played a huge role in Trump’s election. James Clapper, the former director of U.S. National Intelligence, has even said “it stretches credulity to think the Russians didn’t turn the election.” This is unbelievably scary and shows that calling out “Fake News” should not be a partisan issue. Russians knowingly used it to manipulate the results of our presidential election with the goal of destabilizing the country. And it worked. We have to be tougher on this kind of content, but I think it would be a horrible idea for social media platform providers to become the “Fake News” police. 
Looking at just some of the article headlines (“Twitter users are twice as likely to retweet fake news stories than authentic ones,” “Russians Used Reddit and Tumblr to Troll the 2016 Election,” “Donald Trump Won Because of Facebook,” etc.), it’s clear that these platforms are being misused and manipulated, but I believe the results of censorship could arguably be worse. Facebook should close accounts that are cases of stolen identities, but if 10,000 people want to share a news story that is false, I don’t think Facebook has any responsibility to censor that story. The website that hosts the article may, if it wants to be held to a certain journalistic standard, but not Facebook. A fake article is not inciting immediate violence, and I am not comfortable with a private entity classifying information as “fake.” I think citizens should get access to as much information as possible, and we should decide for ourselves what’s real. If we allow Facebook to decide what’s fake and what’s not, it may decide that articles that say bad things about Facebook are fake. When a lot of people get their news from these social media sources, I don’t like the idea of them being able to decide what is worthy of being distributed. I don’t rely on Facebook or Twitter at all for my daily news, but I do get a lot of it from Reddit. I definitely live in an echo chamber. The vast majority of my friends are politically similar to me. I would never watch Fox News or read Breitbart because I’ve already decided they’re fake, which is probably unfair. This is something that concerns me and that I have to actively work to break out of. I like discussing politics with people I know and respect rather than reading inflammatory articles, because I find it easier to understand another point of view when it’s delivered by someone I know is coming from a good place. It’s strange that as we’ve gained access to an almost infinite source of information on the internet, it seems like we know less than ever before. 
The problem with so much data is that it can be spun whatever way is politically convenient. Two politicians can look at the same report on global warming, and one will call it fake news while the other will call it the signal of the end of the world. I don’t think we live in a post-fact world, but the lines are definitely blurrier than before.

According to “The Wired Guide to Net Neutrality” by Klint Finley, “net neutrality is the idea that internet service providers like Comcast and Verizon should treat all content flowing through their cables and cell towers equally.” This means they shouldn’t slow down some people’s data access in order to speed up other people’s.
The arguments against net neutrality are that the things people claim will happen without it did not happen before net neutrality legislation, that regulation can’t solve these supposed problems, and that the free market will work itself out. Opponents claim that internet service providers are private companies and the government should not be able to mandate the service they give their customers--especially when the issue is one of convenience rather than life or death. According to this argument, if customers don’t like a non-neutral internet service provider, another supplier will rise to the top without gratuitous government intervention. The argument for net neutrality is that without it, internet service providers will start bundling the internet: for example, a social media bundle to access Facebook and Twitter, a media bundle to access YouTube and Netflix, and so on. Instead of paying for internet, you would be paying for specific sites. There is also a fairness argument: only the wealthy would be able to afford fast internet. Small businesses would have a harder time catching up to major corporations because they would not be able to pay as much for speedy access. Also, people argue that the free market will not be able to sort this out because companies like Comcast have a near monopoly on the internet, and no new company will have the funds to lay out the infrastructure to compete. I am in favor of net neutrality. I would implement the rules that existed under the original net neutrality order. Internet service providers would not be allowed to block any legal website or app, purposefully slow down the transmission of any legal data, or offer fast and slow internet speeds based on how much one could pay. These are tangible things that can be checked for, and violators would face legal repercussions to the point where it would not be worthwhile for internet service providers to break the rules. 
I don’t see this as over-regulation because internet access is a utility, and equal access is a basic right. The vast majority of jobs require some form of internet access-- to clock hours, to access tax forms, to keep track of inventory, etc. People use it for educational purposes, emergency services, healthcare, and law enforcement, all of which are uncontroversially public services. Why would the postal service be a public service and not the internet, when the vast majority of long-distance communication is done through the internet? In this day and age, internet access is just as much a necessity as electricity. It’s the government’s job to ensure our basic rights are met, and for the reasons presented above, that includes ensuring fair access. I do not see net neutrality as preventing innovation. It allows all businesses to compete for consumers’ attention, and the best content will win. Smaller companies being able to usurp bigger ones makes the bigger ones innovate to stay at the top and the smaller ones innovate to differentiate themselves. It has been argued that getting rid of net neutrality would help innovation because with it, providers have no incentive to improve their service. I don’t believe this, however, because companies like Comcast and Verizon are still competing with each other and would want the best product possible. I do not believe in an unbridled free market in this case because of the high infrastructure costs to get into the field and the near monopoly a few companies already have.

Corporate personhood is the idea that a corporation should have some of the same rights and responsibilities as an actual person. In the article “How Corporations Got the Same Rights as People,” the author Kent Greenfield explains the legal ramifications of this. 
Specifically, according to the Supreme Court, corporations are legally entitled to things like free speech and religious expression, but “they … don’t go to jail when they do something criminal.” He gave an example of the ramifications that is very relevant to Notre Dame--Burwell v. Hobby Lobby Stores, where the Supreme Court ruled in favor of Hobby Lobby and held that non-publicly traded companies did not have to comply with the federal law on religious grounds. The article “If Corporations Are People, They Should Act Like It” gave the example of the New York Times being allowed to publish the leaked Pentagon Papers because of its First Amendment right to publish.
Socially, there is some good that can come from the idea of corporate personhood, as mentioned in the same article. For example, it cited the Deepwater Horizon disaster, in which crude oil was spilled all over the Gulf of Mexico. No one person in the world would have the money to fix this disaster, but because of corporate personhood, the corporation was culpable and held accountable for what it did. Looking at the Hobby Lobby case mentioned above, I personally also see this doctrine being used for social bad--specifically, wealthy shareholders being able to project their personal religious beliefs onto low-income, female employees who are entitled to access to contraceptives by the Affordable Care Act. In my opinion, the ethical ramifications of corporate personhood depend on the rights afforded to the company. For example, the oil spill is an example of ethical good-- somebody (or something) is held accountable for a lot of wrongdoing, justice is served, and good is maximized. At the same time, I would say the ethical ramifications of the Hobby Lobby ruling are frightening because they allow the wealthy owners of companies to make ethical decisions for thousands of people that should come down to individuals. Finally, the New York Times publishing the Pentagon Papers would be another example of good because the stated purpose of newspapers is to inform the public. If a corporation is acting in line with its stated purpose, then I believe in corporate personhood. If it is acting in line with the few who run it, I do not believe that constitutes corporate personhood. A case study of corporate personhood can be found with IBM and the Holocaust. According to the article “IBM and the Holocaust” by Ian Black, starting in 1933 IBM aided “Hitler’s program of Jewish destruction.” It likely would not have been possible on nearly the same scale without them. They created censuses to track Jewish people. 
They created thousands of machines to identify anyone of Jewish blood, economically cripple them, remove them from their homes, transport them to concentration camps, and coordinate train schedules so “victims were able to walk right out of the boxcar and into a waiting gas chamber.” Leaders of the German IBM branch were “rabid Nazis.” The American headquarters knew all of this and didn’t stop it. I firmly do not believe that IBM was ethical in doing business with Nazi Germany. They knowingly contributed to the mass murder of innocent people. The question “should corporations be responsible for immoral or unethical use of their products?” doesn’t even apply here, because the products were customized for the very purpose of the unethical behavior. They were specifically made to locate Jewish people and organize their murder. Had IBM created a punch card machine that it did not specifically market to Nazis, I would not have issues with the company-- terrorists use laptops, and no one is saying Microsoft or Apple should be condemned-- but that is not the situation. Corporations should 100% refrain from doing business with immoral or unethical organizations, because by doing so they would be knowingly aiding immoral or unethical actions. If corporations are afforded the same rights as individual persons, they should be expected to have some of the same ethical and moral obligations and responsibilities. Obviously a person will have more obligations and responsibilities than a corporation because there are fundamental differences between the two; for example, a person is obligated to his or her family. In regards to not aiding Nazis, however, I would say that the responsibilities are exactly the same. In some ways, I feel like this has more to do with the humans in the corporation not aiding Nazis than with the corporation as an umbrella entity, though.

If companies are transparent about it, I don’t think gathering people’s personal information to sell products is unethical. 
Unfortunately, they are rarely transparent about it. I had a really creepy experience with this last year. At a family party, I had a brief conversation with my 12-year-old cousin about how he wanted an Oculus Rift for Christmas. It was at most two minutes long. Later that day, I saw an Instagram ad for the Oculus Rift even though I certainly had never googled it. I checked the settings on my app, and the microphone was enabled by default. The app wasn’t even open during the conversation, and it was still spying on me, which made me extremely uncomfortable. I’d always known that whatever you search for on Amazon will be a targeted ad for the next couple of months, which I am more or less okay with, but this felt more invasive. I would call this unethical data mining because Instagram (1) automatically enables the microphone, which the app does not need to carry out its stated purpose, and (2) as a consumer, I was not informed that this was happening until it was over. Also, a lot of kids use the app and are definitely being similarly spied on, which seems illegal. I believe companies have the responsibility to let end users know what data they’re collecting and how they’re using it.
The trend in the last ten or so years has definitely been away from privacy. In my opinion, there is some information that absolutely should not be collected and some that it would be unrealistic to expect privacy about. The Forbes article “21 Scary Things Big Data Knows About You” by Bernard Marr has examples from both columns. For instance, I would say “Google knows what you’ve searched for,” “your credit card company knows what you buy,” and “YouTube knows what videos you’ve watched” are extremely obvious. Who in the world doesn’t expect all of these? Knowing these things lets the companies make the experience of using their platforms better, and any reasonably informed user understands that it’s happening. On the flip side, “Target knows if you’re pregnant,” “Google knows your age and gender,” and “Facebook knows when your relationship is going south” should not be collected or analyzed. This is taking information users trusted would be used for the site’s purpose and instead using it for something different without their consent. I don’t trust most websites to protect their customers from malicious ads, so I use an ad blocker. A prime example of why is given in the article “What would Kant do? Ad blocking is a problem, but it’s ethical.” The article mentions a hack on Yahoo’s ad network that “infect[ed] millions of Yahoo visitors with malware.” Unless I am 100% sure that will not happen on a website, I will be using an ad blocker there. The article claims this is ethical because it maximizes good-- millions of people not being infected with malware would have been a better outcome than Yahoo losing ad revenue. I will feel bad about using an ad blocker when people are held accountable for malicious ads. According to “Edward Snowden: Leaks that exposed US spy programme,” Snowden leaked that:
While what Snowden did was clearly illegal, a possibly naive part of me thinks his intention in revealing this information was to do good by stopping what he saw as the objectively immoral actions of the NSA. That being said, I think we should make a distinction between his domestic and international leaks. With the domestic leaks, Snowden was informing the general public that their rights as American citizens were being violated by their own government. For this information, I would argue that not revealing it would be immoral. I was particularly disturbed by the “large number” of calls that were intercepted in Washington D.C. after an “error in a computer program” entered “202”, the area code of D.C., instead of “20”, the country code for Egypt. How in the world did they not instantly notice something was wrong when the people on the tapped phone calls were speaking perfect English and not Arabic? Also, it’s kind of convenient that this accident happened with D.C. of all the possible area codes in the world-- the place where the vast majority of lawmakers live. I don’t see any of this as the government protecting its citizens. With the international leaks, however, I think the situation is a lot less black and white. Snowden could have and probably did severely harm the security of the United States. According to the Newsweek article “Why President Obama Can’t Pardon Edward Snowden,” a damage assessment report done by the Pentagon on the leaks found that he compromised “secrets that protect American troops overseas and secrets that provide vital defenses against terrorists and nation-states.” Imagine being an enlisted soldier who happens to be deployed in one of the countries that Snowden revealed the U.S. was snooping on. It’s such an incredibly short-sighted action that could cause a large amount of harm to a large number of people. If we want less war, we want the U.S. to have fewer enemies. 
I agree that the measures the NSA takes internationally are extreme, and I definitely wish they weren’t happening, but I think it’s naive to say that this should all stop overnight. Governments have been spying on each other since the beginning of humanity; the U.S. is just the most recently caught. People around my age have more or less grown up with the fact that we have no real right to privacy. The Patriot Act, which seems like a precursor to the current political climate, came into effect when most of us were 4 or 5. I can’t speak for everyone, but for as long as I've been using computers and phones, I've never had the illusion that the government had reservations about spying on me. I would like to live in a world where it did, but I don’t see that happening in my lifetime. Ultimately, I have trouble seeing Snowden as a hero. I think it’s weird that he stole so much information and only released a small fraction of it. That tells me he doesn’t think the public has a right to know everything-- just what he wants them to know. I also think it’s strange that he released things in bits and pieces. If it was truly about just getting the information out there, why would he do that? Also, according to the Newsweek article, he specifically took a job at the Booz Allen office in Hawaii to get access to top secret information. I was previously under the impression that he was a normal guy who stumbled upon egregious information that spurred him into action, but it turns out he was actively seeking that information from the start. I also don’t love that he is almost certainly interacting with Russian intelligence personnel in some form. For these reasons, I do not think he deserves a pardon. Attached is a guide outlining the job interview process for Notre Dame Computer Science and Engineering students. It gives input on when to start, how to prepare, helpful resources and extracurriculars, networking, and negotiations.