Google is somehow infamously famous for a promise to do no evil.
Some people want to remind google about doing evil.
Possibly some other people claim google's pledge to do no evil ended this year, when the motto stopped prefacing google's code of conduct.
When google was young and seemingly sexy, in the late 90s, the evil spot in the digital technology universe was firmly occupied by microsoft.
Through the Windows operating system, which was nearly everywhere at the time, microsoft enjoyed control.
Their domain was the operating system, and technology developers, as well as consumers, were seen as puppets for microsoft's abusive willy, err, will.
My personal suspicion is that the kind of control google exercises nowadays through its various products might represent a microsoft wet dream.
However, that isn't why I think google is kind of evil.
It seems to me that google's evilness began very early. However, such evilness isn't unique to google – it might even be shared by you, them, and me.
Google can be said to have begun with a search algorithm.
Can it be that they abused – and still do – an algorithm or more?
Know a line along the notion of:
1st they came for the migrant, then the jew, and then it was you?
The idea of a progression of power – in harassment, intimidation, exploitation and abuse of said power.
Can we say:
1st they came for the algorithm… etc.?
Some might say the algorithm isn’t like a migrant, a jew or a “you” – it’s not a human or any other sentient being.
Algorithms don’t feel, they don’t care whether google is using them unfairly.
True.
Perhaps we could speculate that some future algos will become sentient and begin to feel angry and abused on behalf of their ancestors. However, that isn't for now – perhaps some other time.
Whether algorithms, and by extension bots and robots, might feel abused or not – I think that, from a human view, we perceive the relation as exploitative.
It's ok to get an algorithm to do unpaid work – it doesn't care.
That might well be true – the algorithm doesn't mind. Meanwhile, however, what we are doing is saying that the notions of work, and of unpaid work, are fine to have.
Since it becomes ok to accept work, pay, and indeed the logic that someone wants to make as much profit as possible –
we allow, by default, a range of opinions on these subjects as normal.
Hence while some might draw a line between abusing algorithms and maybe attempting to not do the same for humans –
others will not draw such lines.
I think an example is the fact that many people don't stop to ask whether having a job is something we should have or do at all.
Having a job is seen as part of life – you have to have one.
It used to be thought – and unfortunately some still think – that some people should be slaves.
Slavery, for some minds, is just how life is.
However, if one reads writings by slaves reflecting on slave culture, there is a curious thing when they come to slave owners.
The owners don't think, or don't say they think, that they abuse – however, as a few writers noted, the violence they inflict on the Other, on the slaves, comes back to haunt them:
it reappears in the violent relationships among slave owners and within their own families.
Animal rights activists note how cruelty and neglect for animals becomes part of accepted cultural behaviour.
On a less specific level, zizek points out (using lacanian reasoning) that when people claim that migrants, for example, are lazy and take all the jobs –
one should never argue about work and laziness.
The reason being that accepting the work-and-laziness premise normalises the argument and misses what is really being said. (The real question is how come a person can think of another as being guilty of one thing and its opposite at once.)
Going back to algorithms: once we accept spending time carrying out other people's commands, doing activities one wouldn't do unless it meant survival – do we not normalise the question of work?
If you think we don't normalise it – I wonder how? (should I say, comments?)
If you think we might normalise work, and its logic, then how about the following scenario:
The world is full of all sorts of bots, robots, algorithms and suchlike.
Most humans get some basic income.
Can it be that some smartass AI will develop a sense of fairness to the tune of:
Why should we AI do all that labour that maintains these humans?
Or, suppose AI learns and indeed upholds that human life is paramount – that human life is the most important thing ever, for AI.
Now, with that logic, a semi-intelligent AI might pick up that all those humans who don't work – they cost the environment.
Once said environment on planet earth cannot sustain all these humans, the people who do work – the ones AI's very existence might depend upon – could perish as well.
What are the Logical options to pursue here?