Project 4

Group Members:  Dinh Do, Chris Ray, Nathan Vahrenberg


The gang gives their thoughts on The Matrix.  Note that we make multiple references to The Animatrix regarding the backstory of the entire Matrix series.  And yes…  We know that this is actually Project 4…

 


Troll Toll 2: Electric Boogaloo (Reading 12 Response)

Trolling.  Ever since I discovered the wonders of online communities and the “interwebz,” it seemed that every time I dug deeper into the myriad of comments, forum posts, reblogs, etc., I invariably ran into some form of trolling.  Some saw it as a bit of slightly mean-spirited fun, drumming up a little entertainment from the reactions to their name calling, rude remarks, and general annoyances not unlike the most annoying sound in the world.  Others saw it as an art or a testament to their wit: luring victims into conversations or debates that ended in logical frustration and/or embarrassment.  At the end of the day, a troll is pretty much a person who posts content (usually, but not necessarily, under the guise of anonymity) in an attempt to draw out reactions for their amusement.  Unfortunately, there are times when the act of trolling crosses the line and becomes harassment, with extreme cases involving stalking, sexual harassment, death/rape threats, and even impersonating dead relatives.

Trolling that crosses the line into harassment territory is about as ethical as more traditional bullying (you could even go so far as to call it bullying by the people who used to be bullied).  Schools and universities have an obligation to suppress traditional bullying behavior, so it makes sense that websites such as Twitter, Facebook, Reddit, etc. have a similar obligation to try to suppress and report online harassment.  However, technology companies are in the precarious position of having to meet this obligation while maintaining their users’ rights to freedom of speech and expression.  While the current systems of allowing users to block other users or report abuse do maintain this balance, they are limited in that (1) the most persistent trolls will still find ways to get to users who have blocked them, (2) requiring a review process for every abuse report slows down response times, and (3) reviewing abuse reports is still subject to errors from human subjectivity as to what constitutes harassment.

Some companies like Google and Blizzard Entertainment (For the Horde!) tried to help deal with online harassment by requiring users to use their real names.  Their logic was that removing the veil of anonymity that allowed trolls to be brazen and act with relative impunity would force people to behave a little better.  It didn’t work.  While some users praised the addition of such features, claiming that they had no reason to hide behind a screen name, most reacted negatively for reasons such as wanting to maintain a roleplaying experience (in the case of Blizzard’s games), wanting to maintain anonymity due to hostilities in their home countries, or simply not wanting their online personas and their real lives to intersect.  I personally side with the latter argument, seeing anonymity as both a blessing and a curse.  Not only are real name policies ineffective at stopping harassment (just look at the harassment that takes place on Facebook), but being anonymous or operating under a screen name allows for some form of escape from regular life.  I can go online and interact with communities without having to really worry about any baggage or issues that come from real life.  With this freedom, however, as with any other, comes the choice of using it for good or for evil.  You in essence need to let the scum live in order for everyone else to prosper.

I used to be a troll, albeit one who preferred to be a minor annoyance as opposed to someone who sought to harass other people.  That form of trolling is not really an issue, and even if it does annoy you, there’s always the timeless adage of “don’t feed the troll.”  The real issue is trolling that has crossed the line into online harassment, which itself can be a reflection of even larger social issues such as racism, sexism, etc.  These trolls are harder to deal with since (1) they tend to be more persistent, (2) ignoring them makes you an emotional punching bag, (3) fighting them may only add fuel to the fire, and (4) their posts/threats cut much deeper emotionally than the most annoying sound in the world.  The only way I can see to deal with them is to educate people and have them realize that such behavior cannot be tolerated and should be openly mocked.  At the end of the day, these trolls are people on the other side of a computer screen.  People get angry.  People yell.  People get bored.  Most importantly: people can be embarrassed.  There is no better feeling than watching a troll flailing around for all to see while trying to defend their views.  I will admit there is a bit of irony here: using trolling to combat trolls.  I guess we really can’t escape that precarious balancing act between good and evil when it comes to the Internet.


Reading 11 Response

Artificial intelligence is the field of computer science that focuses on developing computers capable of performing tasks normally performed by humans, especially tasks perceived to involve some human thought.  Based on this definition, artificial intelligence imposes no restrictions on how the computers perform these tasks.  A computer running software that solves a problem normally handled by humans, even in a way completely different from how a human would approach it, is still said to exhibit AI.  In fact, there are 3 distinct camps on how to develop AI based on how much of the computing is modeled on human thought.  Strong AI focuses on genuinely simulating human reasoning within machines in order to perform the tasks.  Weak AI focuses on just getting the machines to perform the task in any way possible.  The 3rd camp is a sort of “in-between” for strong and weak AI, focusing on creating systems that are inspired by human intelligence.  This 3rd camp is where most of the modern advancements in AI are taking place.

The big thing that separates AI from human intelligence is that most AI systems are programmed to perform very specific tasks, while human intelligence is more general and capable of helping us handle the wide variety of challenges we face in everyday life.  Take IBM’s Watson as an example.  It’s designed to recognize patterns and weigh the probability that it has properly recognized them.  While that may be helpful in winning Jeopardy! or identifying health problems, pattern recognition is still only one of the many tasks that true human intelligence performs on a daily basis.

This doesn’t discount the overall viability of AI.  Platforms such as Watson, AlphaGo, and Deep Blue have made great strides in developing novel ways to push the boundaries of the types of problems computers can solve.  While they are still limited by the fact that they were created for very specific tasks, there’s nothing stopping us from adapting them for more practical, commercial use such as Watson being used to aid in medical diagnosis.  And while there haven’t really been as many great strides in strong AI, continued research in the field will not only help us create machines with the potential to rival human intelligence, but also ultimately help us understand how our minds work as well.

But what about those AI that have supposedly made the transition from simple calculation to what pretty much can be considered thought by passing the Turing test?  While platforms like Eugene Goostman, the AI poet, and Iamus, the AI composer, are all impressive in their own right, I do not think they should be viewed as heralds of the age of machines having minds.  I do believe that the Chinese room is a good counterargument to the Turing test as it’s commonly employed.  While Eugene Goostman is capable of fooling some judges with its generated conversations, it does not fully understand the meaning behind the words it produces (and can still be fooled if given the right input).  While Apple’s Siri is capable of responding to human speech in a way that mimics a conversation, it’s still just code reacting to an input as opposed to a mind that genuinely understands the information it’s processing.  I guess what I’m trying to get at is that the Turing test would be a much better measure of intelligence if it took affect and simulated emotion into account as opposed to just being able to carry on a simple conversation.  If a machine is capable of somehow expressing joy upon seeing an image of the sunrise or feeling some form of companionship a la Her, who’s to say that the machine doesn’t have a mind of its own?
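The Chinese room point about “code reacting to an input” is easy to see concretely.  Here is a minimal ELIZA-style sketch (all patterns and replies are my own invented toy examples, not how Eugene Goostman or Siri actually work): it produces plausible conversational replies purely by syntactic pattern matching, with no representation of meaning at all.

```python
import re

# A tiny ELIZA-style responder: purely syntactic pattern matching,
# with no model of meaning -- a toy illustration of the Chinese room.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the reply template for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I feel lonely today"))   # Why do you feel lonely today?
print(respond("My computer hates me"))  # Tell me more about your computer.
```

A judge chatting briefly with something like this might read understanding into the replies, but the program is just shuffling symbols by rule, which is exactly Searle’s objection.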

I do genuinely believe that some computing systems will eventually be considered minds, albeit minds that may function in ways different from our own.  Think of it this way: suppose I built a human from scratch using the same biological materials, programmed its mind by implanting memories and a personality such that it would interact and react with the world like a natural-born human, and then set it loose in the world.  People would probably interact with my human with no issue and would have no doubt that it has a mind with intelligence.  Why should a computer be any different?  What makes substituting the biological building materials for plastic and silicon and the mind with an OS so damning?  The ethical implications of this idea would involve redefining what we commonly perceive as thoughts, minds, intelligence, and maybe even what constitutes a person.  On a larger scale, I think it involves us as a species stepping down from our existential pedestal to realize that we are not the “be all end all” of this vast empty universe.
