Censorship and behavior standardization

Pierre Matile
2 min read · May 16, 2022

šŸ˜ Well thank you very much LinkedIn and Facebook for having refused to publish my add concerning my new book
https://lnkd.in/ey--G6KE
and thank you to my publisher, Editions l'Harmattan, for choosing this very representative picture for the cover. It is a photo of a retail shop window in Bulgaria.

This is the message I received from LinkedIn when I tried to place the advertisement:
"Inappropriate language or image: Please use appropriate and acceptable language and images in your ad. Do not use language or images that could be considered offensive by any reasonable viewer of your ad. Even if legal in the applicable jurisdiction, LinkedIn does not allow ads that are indecent, violent, vulgar, suggestive or that, in the opinion of LinkedIn, may be offensive to good taste."

This message, sent by an AI robot, seems to prove once again that these systems have very limited predictive capabilities, unless you consider a picture of a mannequin in a shop window "offensive".

As LinkedIn and Facebook make clear in their statement, the basis of their refusal to publish is not so much what would be considered acceptable in my country, but what they themselves consider right or wrong. When an AI system decides what is right or wrong for other human beings, we move from a world in which the future is predicted to a world in which the future is dictated, defined, specified. The boundaries are clear. Do we want to live in such a world?
For more on the subject, have a look at my book and send me your remarks and comments, including on the picture. I will be happy to remove it from my post if you find it offensive, but then please tell me why.
The book will be translated into English at a later stage.
#ai #censorship #future #freewill #behaviouralchange


Pierre Matile

Author of the "Dictatorship of the Expert Systems"