Big Data Was A Mistake – Wisecrack Edition


Hey Wisecrack, Michael here. Let me ask you
something: how is it that YouTube correctly guesses, time and again, that I need more
Dr. Gregory Johnson in my life? “Make sure you keep your teeth together,
and closed.” (crack) “Yeah.” In other words, how and why does YouTube use
information gathered from search history—say, a one-time query for “EPIC BACK CRACKING”—to
send us down rabbit holes filled with highly specific content, whether that be chiropractic
adjustments, zit-popping, or ASMR? For that matter, how do social media sites
take into account my friend groups, stated political and entertainment preferences, and
browsing habits to serve up ads that are annoying but often chillingly relevant? The above are well-known examples of “big
data” in action. But it’s not all ring-dingers and soap-cutting. While big data has undoubtedly provided us with some cool and useful things, like predicting disease outbreaks or cataloging the Milky Way, it’s also influencing your life in far more subtle, and often spooky,
ways. Big data can be, and has been, used to: predict
who will become pregnant, in order to more effectively target ads; diagnose depression
remotely—again, for the express purpose of marketing goods and services; determine political leanings; evaluate teachers; and make predictions about when employees might
quit. These and other big data uses are squirrelly
enough on their own. But while most people are just creeped out by the general concept
of some modern-day Big Brother creeping on their porn browsing habits, we often overlook
one of the most important parts of big data: its organization. We may balk at how it’s
gathered, sure, but what happens after that? How do you take a sea of data and turn it
into an efficient, ring-dinger-delivering machine? And how does the answer to that question pose
one of the biggest problems for society in the 21st century? Welcome to this Wisecrack Edition on Big Data. In 2020, it’s fairly common knowledge that
internet giants like Google, Bing, Yahoo, and Facebook gather user data through search
engines, user profiles, and other touchpoints. But there are many other data collection hotspots
available to companies looking to rake in that sweet, sweet marketing insight dough. Many of the largest internet companies also
gather data through subsidiary services that the average user might consider a bit more
private — think, for instance, of your email inbox, your phone’s location data, or the
high-concept Mortal Engines slashfic you have saved on Google Drive. Like one traction city eating a smaller traction
city, tech giants gobble up user data at every possible opportunity—and the practice doesn’t
stop with just Google or Facebook. “Data brokers” such as Acxiom, Equifax,
KBM, and credit card companies buy consumer data from smaller companies in order to repackage
and sell it to larger companies. Certain health information, collected by insurance companies,
is also bought and sold, as are data points gathered from: “quantified self” gadgets
such as Garmin; retail loyalty card programs; satellite imagery; employer databases; bank
records; and records of online gaming behavior. To oversimplify a bit, companies collect this
data not to improve consumer experience, as they often claim, but to provide more and
better direct marketing opportunities to advertisers. The end goal, in other words, is profit—which
only happens to coincide with a more streamlined consumer experience. For instance, big data isn’t used to create
games which have more entertainment value; it’s used to create games which keep people
playing and paying, through both subscription fees and microtransactions. In a more grisly example, when data analyst
Alex Pentland managed to obtain anonymous location data on credit card usage, he and
his team were able to use it to predict with a high degree of accuracy which card users
were experiencing depression. They’d found that depressed folks tend to
travel shorter distances when making purchases than those without depression. So the tighter
the circle of card usage, the higher the probability that the card user was suffering from mental illness. Pharmaceutical companies were able to use this information to start marketing directly to specific individuals.
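To make the mechanics concrete, here is a minimal sketch of the kind of signal such a study could extract. Everything here (the data, the threshold, the helper names) is hypothetical, and Pentland’s actual models were far more sophisticated; the point is just how little code the core idea requires.

```python
import math

# Hypothetical purchase records: (user_id, latitude, longitude).
purchases = [
    ("alice", 40.71, -74.00), ("alice", 40.72, -74.01), ("alice", 40.71, -74.00),
    ("bob",   40.71, -74.00), ("bob",   41.88, -87.63), ("bob",   34.05, -118.24),
]

def travel_radius(points):
    """Mean distance from the centroid of a user's purchases (raw degrees, a crude proxy)."""
    lat_c = sum(lat for lat, lon in points) / len(points)
    lon_c = sum(lon for lat, lon in points) / len(points)
    return sum(math.dist((lat, lon), (lat_c, lon_c)) for lat, lon in points) / len(points)

by_user = {}
for user, lat, lon in purchases:
    by_user.setdefault(user, []).append((lat, lon))

RADIUS_THRESHOLD = 0.05  # hypothetical cutoff: tighter circles of card usage get flagged

for user, points in by_user.items():
    r = travel_radius(points)
    print(user, round(r, 4), "flagged" if r < RADIUS_THRESHOLD else "ok")
```

That a dozen lines of arithmetic can gesture at someone’s mental health is exactly what makes this data so valuable to marketers. Now, people have long had their qualms about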
whether this sort of data should even be gathered in the first place: “You took my sonar concept and applied it
to every phone in the city. With half the city feeding you sonar, you can image all
of Gotham. This is wrong.” Rather than dwell on whether or not it’s
ok for someone to watch us poop through our smartphone cameras, we’re going to focus
on what people are doing with that data instead. The basic problem is that people are organizing
data in a way that’s entirely opaque to the outside world, and deploying it in a way
that could profoundly alter your life. All this is sold to us on the basic assumption
that anything an algorithm tells us is objective and true; that decisions made by algorithms are
“proper” and “natural.” After all – who can argue with basic math? Spoiler alert:
this guy. But hold on: isn’t data objective? You’d certainly think so. After all, raw
data is just that: objective facts. Facts like how many Americans own dogs, or how
many times it snowed this year. But as you might have guessed, knowing exactly how many
bowel movements occurred in America in 1987 does not a data scientist make. To perform
its function as a tool for framing the present and predicting the future, data must be organized
to tell a particular story: we need to connect the dots. This is a necessary part of interpretation.
Isolated facts have to be turned into narrative—and that involves picking and choosing what data
matters to this story and excluding data that doesn’t. It also involves interpreting how
those different pieces of data connect. For example, the number of new Nicolas Cage films in a given year correlates with the number of people who drowned by falling into swimming pools that year. Is this something I should consider for my next pool party?
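The unsettling part is how easy such a “finding” is to produce. Here is a minimal sketch, with made-up yearly counts standing in for the real ones, of how a strong correlation can fall out of two series that obviously have nothing to do with each other:

```python
# Illustrative, made-up yearly counts; the point is the arithmetic, not the data.
cage_films = [2, 2, 3, 1, 1, 2, 3]
pool_drownings = [109, 102, 120, 98, 85, 95, 110]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(cage_films, pool_drownings), 3))  # ~0.84, a suspiciously strong r
```

The math will happily hand you a strong correlation for any two series that happen to move together; deciding whether that number means anything is pure interpretation. Whenever any act of interpretation happens,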
it becomes “one person’s interpretation” vs “another person’s interpretation.”
For instance, where one lawyer sees standard business activity, another sees “light treason.” “There’s a good chance I may have committed
some…light treason.” One interpretation can benefit one person,
and screw another. So in the world of data, who is benefitting, and who is getting boned? “I can’t see what’s happening. Are we
boned?” – “Yeah, we’re boned.” You’d think opportunity for said boning
would be minimal—after all, this is data we’re talking about, right? How can objective
facts be unfair or biased? Data scientists can’t make 1+1=3. The problem, however, is that big data stops
being purely objective once humans get involved—which is to say, 100% of the time. That’s because
those humans have to interpret and organize data, or program a computer to interpret it
how they see fit. Data is organized by algorithms and models.
When these models are constructed well, they arrange data in ways that provide incredibly
valuable information. When they are made poorly, they can be misused to prop up self-fulfilling
prophecies that screw over a ton of people while enriching their creators. Let’s have
a look at the difference through the lens of what some people might say is the world’s
most boring sport: baseball. If you’ve ever read Moneyball or seen that
Jonah Hill movie, or are just a sports fan, you may be aware that baseball uses a ton
of data. Many of us are vaguely familiar with RBIs and batting averages. But there are
plenty of other data points to collect over the course of a game—enough, even, to predict
who is more likely to win or lose. In other words, it’s possible to use data to model a player’s performance. And oftentimes, these models determine who gets hired or fired in baseball.
Seems cold? Well, it’s actually not that bad. That’s because these models are transparent:
players are fully aware of them. They know, for instance, that their on-base percentage
is low, that their managers are considering trading them as a result, and that they need
to bring it up if they want to keep their job. “So you want us to walk more?” – “Good
question. Yes.”
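That transparency is literal: the statistics that get players benched or traded are public formulas that anyone can recompute from a box score. On-base percentage, for instance, with a made-up season line:

```python
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
    """Standard OBP: times on base, divided by the plate appearances that count toward it."""
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)

# A hypothetical player's season: walking more raises the numerator directly.
print(round(on_base_percentage(hits=150, walks=60, hit_by_pitch=5, at_bats=520, sac_flies=4), 3))  # 0.365
```

A player who wants to keep his job can check the league’s math for himself. Compare this to, say, the YouTube algorithm,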
which promotes and demonetizes content in a proprietary manner that is completely inaccessible
to the creators being judged by it. Or teachers, who in some cases have been fired as a result
of “value-added modeling” without even understanding how that modeling works. Algorithms and models are also being used
to give out loans and credit cards, or to deny or approve parole for prisoners. As mathematician
Cathy O’Neil notes in her book Weapons of Math Destruction, the proprietary nature of these models means that, unlike statistics in baseball, they are not open to scrutiny. So if I claim that players with certain stats will hit home runs and win games, people can laugh me out of
town when those players fail spectacularly. People make predictions, and then are held
accountable if those predictions don’t play out. Models are tweaked, people are fired,
and so on. But if a mysterious model says that these teachers will make your kids dumber
and these teachers will make your kids smarter, no such scrutiny generally exists. As O’Neil points out, “Companies go out
of their way to hide the results of their models or even their existence”. Such well-defended
company algorithms form a massive, ever-present black box in our day-to-day lives: we know
what goes in (our data) and what comes out (organizational decisions, whether they be marketing strategies or police patrol patterns), but the average person has no idea what
goes on in the middle. What’s important is deciding which factors
go into making a model, which by necessity is always a simplification, not a perfect
representation of the real-world process we want to analyze or predict. Done right, modeling cuts out a great deal
of noise while offering useful predictions: for instance, avionics software is useful
to pilots and air traffic controllers because it tracks airspeed, wind direction, and the
locations of landing strips, while excluding other potential data points like building
locations, street names, and restaurant reviews. Google Maps does track those things, and not
others, making it a useful tool for someone trying to make their way to Taco Bell. Models are created by people trying to achieve
a particular goal, like winning baseball games. But people are inevitably influenced by their
own ideologies in the way they go about collecting data—and in the stories they choose to tell
about that data. As O’Neil puts it: “Models are opinions embedded in mathematics.” Again, it isn’t numbers or data points themselves
that can reflect bias, but the ways in which we interpret large numbers of data points.
This can be further aggravated when we use those interpretations to inform what data
we’ll gather in the future. If we’re not careful, we can wind up creating algorithms
and models that are designed, consciously or unconsciously, to reinforce our existing
worldview. This is particularly important to keep in mind today when, as O’Neil explains,
poorly made or deliberately misleading models can “encode human prejudices, misunderstandings,
or biases into the software systems that increasingly manage our lives”. This bias already has a significant impact
on jurisdictions around the country that use PredPol, a predictive policing service. In theory, PredPol uses raw data on crimes
committed in a given area to predict when and where crime is likely to occur again—think
Minority Report, but with less hand-waving and more math. In practice, PredPol has been shown to set
up an unfortunate “feedback loop”: in response to a crime, it sends more officers
to a given neighborhood, increasing the likelihood that those officers will find and report someone
committing a crime there, which will in turn result in more patrols. This, in turn, raises
the neighborhood’s profile as a hotbed of criminal activity in the eyes of the algorithm,
and therefore the police department, even though the actual crime rate has remained
constant. So if you’re the type to enjoy a leisurely stroll while puffing on a “jazz”
cigarette, you’ll probably be fine assuming you live in a neighborhood that is wealthy
and white. But if you live in an area where PredPol has sent increased patrols because of an abundance of Pickle Rick graffiti, you might just get arrested. Then consider that
some research suggests sending people to prison makes them more likely to commit other crimes,
and the feedback loop gets worse.
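PredPol’s actual model is proprietary, but the loop itself is easy to caricature. In this toy sketch, with every number invented, two neighborhoods commit identical amounts of crime, patrols are allocated according to last week’s reports, and reports scale with patrol presence:

```python
# Toy feedback loop: identical actual crime, but patrols follow reported crime.
ACTUAL_CRIMES_PER_WEEK = 100   # the same in both neighborhoods, by construction
TOTAL_PATROLS = 10
REPORTS_PER_PATROL = 6         # crimes a single patrol happens to observe and write up

reports = {"A": 11.0, "B": 9.0}  # a small, historical accident in the data

for _ in range(15):
    total = reports["A"] + reports["B"]
    patrols = {hood: TOTAL_PATROLS * r / total for hood, r in reports.items()}
    # Actual crime never changes; only the measurement of it does.
    reports = {hood: REPORTS_PER_PATROL * p for hood, p in patrols.items()}

print(reports)  # A is recorded as ~22% more "criminal" than B, indefinitely
```

Nothing in the loop ever re-checks whether reports track actual crime, so a small accident in the historical data hardens into a permanent “hotspot.” The issues with modeling are compounded even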
further by the fact that most of us have no idea what the hell is going on during the
design process. Big data models use data to create very specific stories about people
and groups. But those algorithms are not created by, or understood by, the people they’re
telling stories about. This is not an accident. Unlike baseball players, who can use their
publicly reported stats to focus their training efforts and improve their performance, we
can’t readily access company-held data that might help us improve our own lives. For instance, what if we knew the factors
that, statistically, might sabotage our diet plans — like a bad day at work or walking
past a delicious pie shop on our way home? That knowledge could empower us to make different
choices or change our habits. BUT that would also result in us buying fewer pies, which
would, in turn, be very bad for Big Pie. In other words, far better to keep companies
profitable, while we are ignorantly belching up boysenberry. Consumer algorithms and models
are valuable to companies precisely because they are hard for the average consumer to
access and interpret. But it gets even worse, because the little
that we do know about big data (namely, that we are being constantly tracked) can still
have a tangible impact on our behavior. The reason has to do with privacy, which law
professor Julie Cohen defines as “shorthand for breathing room to engage in the process
of…self-development.” Big data takes a giant meat cleaver to that breathing room,
which Cohen argues means that we lose the ability to choose which parts of our life
we share with others. So maybe you’re not quite ready to publicly try and fail at writing
your first murder mystery, but would love to do so privately. But, in the age of the
internet, you can’t watch a video on arsenic poisoning without YouTube making judgments
about who you are as a person. And with the power of recommended videos, it will likely
serve you content that will slowly turn you into that person. You watch one video of feet,
and forever the algorithm thinks you’re a weird feet guy. When our every embarrassing
selfie, search query, and online purchase is logged on a server somewhere out in Silicon
Valley, we are reduced to our foibles and shortcomings. “Soon my little Box will be on countless
TVs around the world…feeding me credit-card numbers, bank codes, sexual fantasies, and
little white lies.” This all seems complicated enough,but trust
me, the situation gets even weirder because an algorithm that can be genuinely useful
in one context can cause lasting harm in another. So what happens when a highly-specific algorithm
goes mad with power? “Unlimited power!” You get the credit score system. It was originally
meant to determine whether or not the bank should lend you money to start a fondue chain
or something. Today, credit scores are also used to make decisions about housing applications,
and even employment. So if you missed a credit card payment because you lost your job, good
luck finding a new one. The gathering, organization, and implementation
of big data raise some thorny questions around the issue of free will, which both Aristotle
and Thomas Aquinas believed is one of the core traits that distinguishes the human from the
animal or the machine. Knowing what we do know about the role big data plays in our
lives, how do we reconcile the fact that so many of our decisions seem to be beyond the
reach of our conscious minds? To put it bluntly: how do we make free choices about where and
how we work if, theoretically, banking models, browsing history, and likelihood of pregnancy
or depression can keep otherwise qualified candidates out of job pools? How do we exercise
our free will in our social lives when Facebook and other social media companies control what
we see in our feeds, and who we are most likely to interact with? Under the circumstances, how do we stay human? Philosophy professor Colin Koopman believes
that we may require whole new fields of study to find the answers to these and other questions
surrounding big data. Koopman points out that, to date, our society has always played catch-up
with big data. New data-based technologies are allowed to emerge pretty much without
restraint, and only rarely, after the fact, do we put regulations in place to control
them. To put it another way, we’ve taken a ‘develop first, ask questions later’
approach to data science. To form an idea about how we might move forward,
Koopman looks to medicine, law, and other complex fields with entrenched ethical codes.
These codes help consumers because if, say, your lawyer snitches on you, they can be disbarred.
In a similar way, data science—and the millions of us affected by it every day—may benefit
from commonly held, enforceable standards. Or at the very least, it might encourage brands
to stop hovering over our shoulders like a bunch of creepy street-hawkers. “You wanna buy a sundial?” So what do you think, Wisecrack? Will we be
able to turn big data into a transparent tool used for the benefit of all? Or will biased
algorithms continue to enable unfair treatment under the guise of math and science? Is there
another way forward that we haven’t considered? Let us know your thoughts in the comments
below, and as always – thanks for watching. Later.

100 Comments

  1. Ideally it should become a human rights violation, because we are forced to give our data to be able to minimally participate in society (get a job, have a home, etc.). I think there are lots of solutions to big data; that's not the problem. The issue is that it's too profitable and powerful for our governments to agree to stop it.

  2. YT hasn't shown me anything new in years. It's like they think I'm in an ideology and must only look at things I agree with. Never shows me anything new or random. I find out about new channels that have existed for years by other creators not Google.

  3. Exploitation will keep going in a capitalist system, so there's incentive for people who have power not to put Silicon Valley under scrutiny. With everything that happened since 2013, it doesn't really seem like any of the past controversies harmed any of these companies at all.

  4. 11:38 but isn't this good? No one should get away with crime regardless of ethnicity or geographical location. Theoretically, this system should transform our worst crime locations into the safest places as law abiding citizens get faster response times and local criminals are sent to correctional facilities. If it isn't working, then isn't the problem with our criminal justice system as a whole, not this algorithm? Our jails should be rehabilitating criminals. Our police officers should make our community safer.

    You can't blame a functional tool that's failing because everything else around it is broken.

  5. At the 15-minute mark, you almost got it. Getting targeted ads and suggested videos is one thing. The next thing is that the system will get to know you; it already tests you. Once it knows you enough, it will be used to push your buttons and control you, making you make decisions against yourself, removing the minimal free will we have, ending sovereignty.

  6. The ads I get are always about shit I don't care about because of how I use the internet. Once in a while I'll get an ad in Japanese that actually interests me, since I'm learning the language. Now let's see if I get more ads in Japanese.

  7. Privacy is a right – not a product
    Put big tech CEOs in jail until they agree to stop holding our data hostage

  8. "Is there another way forward we haven’t considered?" – Yes, there is.

    A huge part of the problem here is the business model. So many of these services, especially Facebook and Google, rely on third-party advertisers to monetize your data. If there was a device and software ecosystem (let’s call it “Life OS") that you subscribe to directly, then third-party advertisers are no longer a variable. You no longer have to serve outside interests.
    Imagine if your smartphone, your wearables, and your smart home devices were constantly gathering your data, but instead of serving it up to third-party advertisers, they optimized your life along the vectors that you dictate. In twelve months I want to be 10% more productive, 10% more rested, and 10% happier. I want to increase my sleep, my sex life, and my income. Is there any reason, with all of the data being gathered about me, that all of those couldn't be optimized? I submit that they could. The only reason they are not right now is because there is no business model supporting it. A Life OS direct-subscription business model would totally change the consumer technology landscape.

  9. Yeah, I remember being baked one day and putting on a Simpsons clip for fun. Two months later I still get daily suggestions for Simpsons clips.

  10. I'm sorry boyfriend but you have no fucking idea what the hell you are talking about. A machine learning model is only as good as the data it is provided. And the key is also preprocessing the data appropriately depending on the model to be used. Also what the fuck is big data? There is no appropriate definition for something like big data. But that did not stop you from making an ignorant video on some shit you have no clue about.

    No machine learning model out there needs this "big data" they need data. In fact some machine learning algorithms get a bigger bias the more data you give them. And I assure you those engineers implementing these algorithms are not as ignorant as you are.

    In conclusion this is what you get when you give an art/philosophy major access to numbers, they start thinking they actually got an education instead of thinking the truth in that they wasted their time with a useless degree.

    Eat a dick you misinformed moron!

  11. We barely understand how neural networks work, so there is no transparency even if you know the algorithm, because even the creator doesn't know.

  12. As long as we let rich assholes make free money, ain't shit gonna change. Think if you got paid for each bit of information companies bought about you.

  13. Another great video to be added to the reasons why we need blockchain technology to become mainstream and bitcoin to succeed.

  14. I agree that bias in models and even the way we gather statistics is a problem, but I can't help but notice that Wisecrack is revealing their own biases here. All the examples here are coming from a left wing point of view. As a right wing example, the claim that women make less money than men is scientifically and statistically illiterate. So is the claim that one in four women are raped on college campuses. People are lying to you with statistics all the time on virtually every hot button issue. The problem with regulation is the question of who watches the watchmen? Who gets to decide what data is private and what must be shared? If you think big pharma has perverse incentives, imagine what it would be like for politicians to make those decisions. Mike Bloomberg is already pushing laws about what you're allowed to eat. Perhaps most importantly, regulation can't solve the problem of human bias. No matter how transparent the statistics are, you'll still have people doing junk science and activists trying to spin results in ways that favor their beliefs.

  15. Capitalism: creates algorithms that can predict individual's behaviors and manipulate them for profit
    Me: installs adblock

  16. Bring. Back. Jared. Please. I actually like the new hosts, don't get me wrong, but I really miss Jared – he was so great!

  17. It always bothers me when people complain about data companies using data to make money. Like, that’s the entire point of a business. Is it fucked? Yes. Is it a revolutionary idea? Nope.

  18. So they're doing a gender-bending reboot of Big Momma's House – which female actor would you have star in the film?

  19. I've finally read a book before it showed up in a Wisecrack video, or has Weapons of Math Destruction been referenced on the channel before?

  20. As a guy doing a PhD in big data, there are two important things I think you missed here. The first is that the data itself is often biased. The famous example mentioned in the video is an algorithm to determine who gets parole. The developers in this case were explicitly banned from using racial data. Turns out though that one of the strongest predictors was a person's zip code. And neighbourhoods are often quite segregated still, meaning that the systemic injustice gets translated into the data unintentionally. Secondly, while we are good at determining people's personality traits, we're actually really bad at using that knowledge to change behaviour. The suggestion that at present we're all pawns of Big Brother without free will is just buying into the marketing of big tech. For the moment anyway.

  21. Well you won't get anywhere with legislation as long as congress is tech-illiterate and in the pocket of huge corporations.

  22. Marc Elsberg's science thriller ZERO deals with daily surveillance and the gathering of huge data sets by big companies. One of these companies uses the data collected from users in order to suggest personalized improvements in every part of someone's life, i.e. tips for academic, athletic, romantic, etc. improvement.
    I have to admit: this form of supported personal development through big data is very intriguing to me.

  23. 3 quick and easy steps to avoid big tech

    1. go to https://alternativeto.net/
    2. search for an application or website you currently use
    3. change "any license" to "Open Source"

    bonus tip: if you look into an application/website and it's run by a "cooperative" instead of a "company," and it's open source, that's pretty much the best you can get!

  24. I'm pretty skeptical when I hear something like "Oh, let's get rid of bias in the algorithm." It sounds like code for "let's social-engineer a fake politically correct world," because we want to fight ____ [insert here some so-called "noble cause" that liberals want to fight].

  25. Generating thoughtful incentives can help shape the outcomes we receive. Nicholas A. Christakis (https://g.co/kgs/WGugoj) discusses ways that we can organize ourselves so the data moving through us, moving us, that we move through can be shaped to improve outcomes.

    Modern monetary systems give the task of assigning value (through the offering of loans) first to banks who will crumble if they underperform against their competitors. The banks are incentivized to assign value in order to maximize return of value. The current monetary systems do not offer incentives or means for individuals to assign value except after first trading their energy and resources to those the banks have deemed valuable.

    Kevin Kelly (https://en.wikipedia.org/wiki/What_Technology_Wants) discusses how a more coherent view of the forces surrounding us might be achieved by embracing the concept of technologies as independent and interdependent, (as yet) unsentient organisms domesticating humans in order to reproduce, evolve, etc.

    The games and other technologies we employ to build trust, to form bonds, to develop ourselves and our world impact the choices we can make. Asymmetric warfare against corporations that are more cohesive, have access to more data, and lack incentive to care about consumer health beyond its implications for profits is a game unlikely to generate health or happiness.

    Would we could have games more enjoyable to play, more productive in their outcomes, and competing towards ends the participants find purposeful.

    Brought to you by brand. Brand promises! Buy brand!

  26. Data is not information; it becomes information when interpreted. So yes, it's subjective. The theoretical framework is VERY important…

  27. The next thing you know, the algorithms would demonetize critical videos such as this one to protect themselves.
    Welcome to a new age of Digital Authoritarian Bureaucracy
    Franz Kafka would be proud to see his warnings taken by us.

    All Hail the Big Data

  28. I always hear "big daddy" every time "BIG DATA" is said…
    Also, is it pronounced "da-DA" or "da-TA" or "day-DA" or "day-TA"??? as a non-anglophone this stuff is challenging…

  29. It's simple: just don't put private data on the internet. It's not that hard. So far I'm in control of what I see, because I kind of know how the dumb algorithm works, and by making simple choices you can make it do whatever you want. I have to admit that I'm a programmer and more or less paranoid about security, so I kind of have the same mentality as whoever programmed these algorithms. But again, if you don't want Google or Facebook to give you McDonald's ads, stop taking stupid pictures of your Happy Meal and putting them on Instagram. It's that simple.

  30. Great video! If you want a deeper dive into some of these ideas I highly recommend this usenix keynote speech from the Harvard University professor James Mickens from 2018 (it's also quite hilarious if you're in the IT world at all): https://youtu.be/ajGX7odA87k

  31. Whether it's Google, some random ads, or DeviantArt, no site has ever been able to pin me down.
    They are only right 1 in every 25 or 30 times; the rest of the time I have to swim through crap I really don't want to touch.

    So how do I make myself an outlier?
    I ask pointed questions through key words, rather than general questions through basic dialogue structure.
    I am not chatting with Google; I am performing a query.
    And those questions have to do with the work and/or curiosity of the moment, not me.

    As you said, you can get into a feedback loop: if you give too much of yourself, you can become insulated in a bubble of factors surrounding you that can skew the whole and reinforce biases in a way that is as bad as, and in some cases worse than, fake news.

    That, and I am allergic to social media; it tends to agitate my allergy to stupid. I have violent fits.
    And Facebook has locked my account multiple times for being fake, because I don't play the game.
    I don't chat with strangers or browse for random crap, and I only check my account once or twice a year.
    I only have one because some other things will only allow you access if you can link to FB.

  32. Excellent video except for the chiropractic scammers twisting people's bones. Food for thought on the power we've relinquished to data gatherers.

  33. To add to this: collected data is also used to train neural networks, some of which are then used to improve data collection and analysis.

  34. I agree that ethics-based regulations should be considered and written proactively rather than reactively. The issue with that, however, is that it's not a profitable model for companies, so good luck getting such an attitude to be adopted by the majority of congress.

  35. Before I tell you about how big data is being used by advertisers and worse to manipulate you at the very core of your soul, let me shill for some crappy fucken earbuds.

  36. Big data feels like an inevitable progression from record keeping. I think it’s a natural development in an environment where intention exists. The question is not whether it’s right or wrong. That’s subjective, and determining that it’s wrong will do little to prevent its use and progress. The real question is, how can we use it better? We’re recognizing risks and problems. We need to use that to make things better. Just like we do in every other field. And while this field is still young, and regulation for it is still weak, we must also be cognizant about the ways in which it can hurt us. That won’t prevent all of us from being hurt, but it’s the best we can do in the midst of a new technology/practice.

  37. "Who gets boned?"

    Consumers. It's always the consumers that get boned. That's capitalism. Real human beings are reduced to consumption statistics, and then used until they die.

  38. Just take a course and become a data scientist. It's well paid, you can work from home, and nobody can blame you when you're wrong.

  39. The Jim Carrey Batman reference was good, but if you guys had gone with The Lawnmower Man that would have been better. The difference… singularityship

  40. Not to be needlessly reductive, but I'm going to be needlessly reductive and say Capitalism is the biggest driving factor behind Big Data being used to analyze us.
    Olly from Philosophy Tube recently made a really good video on Data, and how your online consent is basically irrelevant to information brokers and governments.

  41. Yahoo! and Bing: "They said our names!! They said 'big DATA like Google, Bing, Yahoo!, and Facebook'…! We are still not forgotten…!"

  42. This kinda stuff creeps me out. We are so far behind in understanding wtf we do online.
    The Singularity Is Near! 🤷‍♂️

  43. Don't use chiropractors kids! They are essentially unlicensed practitioners backed by pseudoscience! *Go to a physiotherapist instead!* Also try Ad Block Plus and Ghostery to circumvent some of the internet tracking you'll encounter while browsing.

  44. Some bad things you did not comment on:
    – We lose the ability to make mistakes and recover. That's an essential part of life.
    – Many times the profile they build is valued by whatever the big data contractors want. For example, you can be politically persecuted in jobs, whether you are actually politically engaged or the algorithm just deduces you are from the things you search.
    – The freedom we found on the internet, to search, to gather with people, all goes to waste because an algorithm will evaluate you for it.
    – We should not have to adapt ourselves to the algorithm. It should be made to find the specific information we've collectively decided we need, with the freedom to opt out of being counted at no cost. Facebook, for example, should be forbidden from denying a person its service if they do not explicitly allow their data to be gathered, and companies should only be allowed to build data to answer problems society has voted on. Nobody should have the freedom to collect big data for any reason other than the ones a people has decided should be collected. It should work on a country level, but also on an international level… So companies should not have the right to build big data.

  45. A couple of gross oversimplifications here, particularly regarding the statistics on crime data.

    Making the argument that an algorithm causes an unfair amount of patrols in a given neighborhood due to crime statistics is one thing; stating that officers will then fabricate crime data is another, even if it's statistically accurate. And choosing to ignore the false-positive data points that are then introduced as an obvious logical result, because they happen to fall outside a narrative, is an entirely different one.

    It's one thing to call somebody a racist; it's another to call somebody a racist because an algorithm made them do it. I would be careful about how you approach certain issues, just as I would say that treating free will as predictable, and therefore quantifiable, is a horrendously short-sighted and widely misinterpreted conjecture, as humanity isn't entirely logical.

    Food for thought.

  46. bro all this talk about big data and big brother and all this creepy stuff, and that at 7:52 i see my last name in those batting lineup sheets? Natelli..go look for yourself…and im 99% sure there has never been a baseball player with the last name Natelli in history..so why, and how the hell is my last name on that batting lineup sheet??? and the worst part is apparently i went 0 for 3 with 1 k in this game…my mind is blown right now…is that from the actual movie moneyball or is it from something else??
