Manchester United manager Jose Mourinho has praised Mohamed Salah ahead of the club’s trip to Anfield to face Liverpool on Sunday.

Mourinho admits stopping the Egypt international will be a tough task for his team as they head to Merseyside to face Jurgen Klopp’s side with their top-four hopes hanging in the balance. Defeat at Anfield would leave United 19 points behind their rivals after just 16 games of the season, and the threat of Salah looms large.

“I think he has developed incredibly well at every level since Chelsea. Physically, an amazing development,” Mourinho said, according to Mirror Football.

“He was a fast, fragile boy and now he is a fast, strong man. He was psychologically not adapted, coming direct from a small club in Switzerland like Basel to a big club in England and the Premier League when he signed for Chelsea. It was too much.

“I remember I played him at White Hart Lane against Tottenham and at the Etihad against Manchester City and it was too much for him.

“Now he plays everywhere against any team. He can go to Barcelona, he can go to Madrid, he plays like ‘I am Mo-Mo-Salah and I am afraid of no-one and nobody.’

“With his levels of experience in Italy and coming back to the Premier League, he is a fantastic player, a completely different player to the potential player we brought from Basel to Chelsea.

“He was a project player; now he is one of the best players in the world. So I don’t think the fact I know him helps me, because I knew a project player and now he is a top player.

“But every match we try to analyse opponents, the strengths and the weaknesses, and when you go to Liverpool they have many strengths, but of course he is nuclear in strength.”
Meredith and David Zinczenko’s Galvanized agency are partnering on the launch of a quarterly magazine built off the latter’s successful “Eat This, Not That!” healthy-eating franchise.

The first issue will be distributed at 80,000 newsstands starting Dec. 2, though Meredith wouldn’t release the total draw. Galvanized will handle editorial, while Meredith is responsible for printing, distribution, sales and marketing. With heavy paper stock, a thick folio and a hefty $13 price tag, the magazine is built on reader revenue, but advertising still plays a small part. Just one of its 120 pages, the back cover, is an ad, but Meredith says it’s looking to expand on that base in future issues.

It’s a model that the publisher has used in the past, according to Tom Harty, president of Meredith’s national media group. “We have produced a number of high-quality publications through our Special Interest Media Group on topics from food to home décor to kitchen and bath design,” he says. “Frequently called ‘bookazines,’ these titles typically sell for a higher price point.”

The content also meshes well with Meredith’s portfolio of epicurean and healthy-living brands, including Eating Well, allrecipes and Fitness. Meredith’s deal doesn’t extend to web content or books, however.

For Zinczenko and Galvanized, the magazine launch is the first piece of a franchise reboot that will include a new website, mobile apps, a digital magazine and two new books. Zinczenko, the former editor-in-chief of Men’s Health and founder of “Eat This, Not That!,” bought the brand from Rodale, his former employer, in September for an undisclosed amount. The franchise has produced 16 books and sold more than 7 million copies since its debut in 2007.
The first phase of Upazila Parishad elections will be held on 10 March next, reports UNB.

Election commission secretary Helal Uddin announced the election schedule at a press conference at the EC secretariat on Sunday. The last date for the submission of nomination papers is 11 February, the scrutiny of nomination papers will be held on 12 February, and the deadline for withdrawal is 19 February.

On 10 January, EC secretary Helal Uddin Ahmed said the polls would be held in phases starting from the first week of March. The first Upazila Parishad elections were held in 1985, and the following three were held in 1990, 2009 and 2014. The last one was organised in six phases.

Besides, the election commission fixed 4 March for the elections to the JS reserved seats. For those polls, the last date for the submission of nomination papers is 11 February, the scrutiny of nomination papers will be held on 12 February, and the deadline for withdrawal is 16 February.
A union parishad member was killed in what police called a gunfight with them in Teknaf of Cox’s Bazar early Tuesday, reports UNB.

The deceased was Hamid Member alias Dakat Hamid, 45, a member from ward-5 of Sadar union parishad and son of late Abul Hashim of Maheshkhalia in the upazila. According to police, Hamid was accused in 12 cases filed on different charges, including human trafficking, murder, and possession of firearms and drugs.

Pradip Kumar Das, officer-in-charge of Teknaf model police station, said a police team, led by sub-inspector Sujit Chandra Dey, conducted a drive in the Maheshkhalia area around 3:30pm on Monday and arrested Hamid. Around 1:00am, they conducted another drive at Maheshkhalia Noughat, taking Hamid along, to recover firearms and yaba pills, he said.

When police reached the spot, Hamid’s cohorts opened fire on them, forcing them to retaliate, which triggered a gunfight, the OC said, adding that Hamid was caught in the line of fire at one stage. Hamid was taken to Teknaf Health Complex, where physicians referred him to Cox’s Bazar Sadar Hospital for better treatment. However, he succumbed to his injuries around 3:40am.

Three policemen were also injured in the gunfight, Pradip claimed, adding that they recovered four LGs and 6,000 yaba pills from the spot.
Seeking to foray into the fine-dining restaurant business at swanky hotels across the country, Indian Railway Catering and Tourism Corporation (IRCTC) is all set to open a speciality restaurant in the ITDC-run Hotel Janpath in the national capital.

“IRCTC is in the final phase of negotiations with ITDC to get some space at Hotel Janpath,” said Sandip Dutta, Manager, PR, IRCTC. The official said IRCTC plans to develop and operate
Facebook was recently granted a patent titled “Selection and Presentation of News Stories Identifying External Content to Social Networking System Users”, on July 31, 2018. It aims to analyze user data to curate a personalized news feed, and to give users control over the kind of news they want to see.

Facebook wants to add a filter option to its news feed, making it easier for users to find relevant news items. As per the patent application, “the news stories may be filtered based on filter criteria allowing a viewing user to more easily identify new stories of interest”. For instance, a filter can be added to view stories associated with another user or with a particular news source. You can also add a keyword filter to get all the stories related to that specific keyword.

There are a lot of groups and pages on Facebook which help reflect a user’s interests, and the kind of content a user posts also says a lot about his or her preferences. Since so much user data is available, Facebook automatically analyzes the user’s profile to optimize the news feed to the user’s tastes.

There is also a ranking criterion involved when it comes to filtering the news feed. The patent reads: “news stories are scored and ranked based on their scores. News stories may be ranked based on the popularity of the news story among users of the social networking system. Popularity may be based on the number of views, likes, comments, shares or individual posts of the news story in the social networking system.” News stories can also be ranked in chronological order.

Once Facebook is done analyzing the user profile, filtering the feed based on the filter criteria, and ranking the stories based on the ranking criteria, a newly customized news feed is generated and presented to the user.
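The scheme the patent quotes (filter stories by source or keyword, then rank them by a popularity score built from views, likes, comments and shares) can be sketched in a few lines of Python. This is a hypothetical illustration of that idea, not Facebook's implementation; all names, the sample data, and the equal weighting of the engagement signals are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    source: str
    views: int = 0
    likes: int = 0
    comments: int = 0
    shares: int = 0

    def popularity(self) -> int:
        # Popularity aggregates engagement signals, as the patent describes;
        # giving each signal equal weight is an illustrative assumption.
        return self.views + self.likes + self.comments + self.shares

def filtered_feed(stories, source=None, keyword=None):
    """Apply optional source/keyword filters, then rank by popularity."""
    matching = [
        s for s in stories
        if (source is None or s.source == source)
        and (keyword is None or keyword.lower() in s.title.lower())
    ]
    return sorted(matching, key=Story.popularity, reverse=True)

stories = [
    Story("AI beats humans at Go", "TechDaily", views=500, likes=40),
    Story("Go tournament results", "SportsWire", views=200, likes=10),
    Story("New AI chip announced", "TechDaily", views=300, likes=90),
]
feed = filtered_feed(stories, keyword="ai")
print([s.title for s in feed])  # → ['AI beats humans at Go', 'New AI chip announced']
```

A real system would also support the chronological ordering the patent mentions, which here would just mean sorting on a timestamp field instead of the popularity score.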
Facebook has been taking measures to curb fake news in its feed, and the news filter tool is expected to help further. It could prevent irrelevant and fake news from appearing in users’ news feeds, as users can choose to see news only from trusted sources. In fact, Facebook recently acquired Bloomsbury AI to fight fake news. Additionally, the latest news sources, accounts, groups, and pages will also be recommended to users based on the data analyzed. With so much data floating around on Facebook feeds, this patent idea seems like a much-needed one. There are no details currently on when, or if, this feature will hit the Facebook feed.

What do you think about Facebook’s news feed filter tool patent? Let us know in the comments below.
The ethics of artificial intelligence seems to have found its way into just about every corner of public life. From law enforcement to justice, through to recruitment, artificial intelligence is impacting both the work we do and the way we think. But if you really want to get into the ethics of artificial intelligence, you need to go further than the public realm and move into the bedroom.

Sex robots have quietly been a topic of conversation for a number of years, but with the rise of artificial intelligence they appear to have found their way into the mainstream – or at least the edges of the mainstream. There’s potentially some squeamishness when thinking about sex robots, but, in fact, if we want to think seriously about the consequences of artificial intelligence – from how it is built to how it impacts the way we interact with each other and other things – sex robots are a great place to begin.

Sexualizing artificial intelligence

It’s easy to get caught up in the image of a sex doll, plastic skinned, impossible breasts and empty eyes, sad and uncanny, but sexualized artificial intelligence can come in many other forms too. Let’s start with sex chatbots. These are, fundamentally, a robotic intelligence able to respond to and stimulate a human’s desires. But what’s significant is that they treat the data of sex and sexuality as primarily linguistic – the language people use to describe themselves, their wants, their needs, their feelings.

The movie Her is a great example of a sexualized chatbot. Of course, the digital assistant doesn’t begin sexualized, but Joaquin Phoenix’s character ends up falling in love with his female-voiced digital assistant through conversation and intimate interaction. The physical aspect of sex is something that only comes later.

Ai Furuse – the Japanese sex chatbot

But they exist in real life too.
The best example of these is Ai Furuse, a virtual girlfriend that interacts with you in an almost human-like manner. Ai Furuse is programmed with a dictionary of more than 30,000 words and is able to respond to conversational cues. But more importantly, Ai Furuse is able to learn from conversations. She can gather information about her interlocutor and, apparently, even identify changes in their mood. The more you converse with the chatbot, the more intimate and close your relationship should become (in theory).

Immediately, we can begin to see some big engineering questions. These are primarily about design, but remember – wherever you begin to think about design, you start to move towards the domain of ethics as well. The very process of learning through interaction requires the AI to be programmed in certain ways. It’s a big challenge for engineers to determine what’s really important in these interactions; they need to make judgements about how users behave. The information that’s passed to the chatbot needs to be codified and presented in a way that can be understood and processed. That requires some work in itself.

The models of desire on which Ai Furuse is built are necessarily limited. They bear the marks of the engineers that helped to create ‘her’. It becomes a question of ethics once we start to ask if these models might be normative in some way. Do they limit or encourage certain ways of interacting?

Desire algorithms

In the context of one chatbot, that might not seem like a big deal. But if (or as) the trend moves into the mainstream, we start to enter a world where the very fact of engineering chatbots inadvertently engineers the desires and sexualities that are expressed towards them. In this instance, not only do we shape the algorithms (which is what’s meant to happen), we also allow these ‘desire algorithms’ to shape our desires and wants too.

Storing sexuality on the cloud

But there’s another, more practical issue as well.
If the data on which sex chatbots or virtual lovers run is stored in the cloud, we’re in a situation where the most private aspects of our lives sit somewhere that could easily be accessed by malicious actors. This is a real risk with Ai Furuse, where cloud space is required for your ‘virtual girlfriend’ to ‘evolve’ further; you pay for additional cloud space. It’s not hard to see how this could become a problem in the future. Thousands of sexual and romantic conversations could easily be harvested for nefarious purposes.

Sex robots, artificial intelligence and the problem of consent

Language, then, is the kernel of sexualized artificial intelligence. Algorithms, when made well, should respond to, process, adapt to and then stimulate further desire. But that’s only half the picture. The physical reality of sex robots – both as literal objects, but also the physical effects of what they do – only adds a further complication to the mix.

Questions about what desire is – why we have it, what we should do with it – are at the forefront of this debate. If, for example, a paedophile can use a child-like sex robot as a surrogate object of his desires, is that, in fact, an ethical use of artificial intelligence? Here the debate isn’t just about the algorithm, but about how it should be deployed. Is the algorithm performing a therapeutic purpose, or is it actually encouraging a form of sexuality that fails to understand the concepts of harm and consent?

This is an important question in the context of sex robots, but it’s also an important question for the broader ethics of AI. If we can build an AI that is able to do something (e.g. automate billions of jobs), should we do it? Whose responsibility is it to deal with the consequences?

The campaign against sex robots

These are some of the considerations that inform the perspective of the Campaign Against Sex Robots.
On their website, they write: “Over the last decades, an increasing effort from both academia and industry has gone into the development of sex robots – that is, machines in the form of women or children for use as sex objects, substitutes for human partners or prostituted persons. The Campaign Against Sex Robots highlights that these kinds of robots are potentially harmful and will contribute to inequalities in society. We believe that an organized approach against the development of sex robots is necessary in response [to] the numerous articles and campaigns that now promote their development without critically examining their potentially detrimental effect on society.”

For the campaign, sex robots pose a risk in that they perpetuate already existing inequalities and forms of exploitation in society, and they prevent us from facing up to these inequalities. The campaign argues that sex robots will “reduce human empathy that can only be developed by an experience of mutual relationship.”

Consent and context

Consent is the crucial problem when it comes to artificial intelligence, and you could say that it points to one of the limitations of artificial intelligence that we often miss: context. Algorithms can’t ever properly understand context. There will undoubtedly be people who disagree with this. Algorithms can, for example, understand the context of certain words and sentences, right? Well, yes, that may be true, but that’s not strictly understanding context. Artificial intelligence algorithms are set a context, one from which they cannot deviate. They can’t, for example, decide that actually encouraging a paedophile to act out their fantasies is wrong; they are programmed to do just that.

But the problem isn’t simply with robot consent. There’s also an issue with how we consent to an algorithm in this scenario. As journalist Adam Rogers writes in an article for Wired, published at the start of 2018: “It’s hard to consent if you don’t know to whom or what you’re consenting.
The corporation? The other people on the network? The programmer?” Rogers doesn’t go into detail on this insight, but it gets to the crux of the matter when discussing artificial intelligence and sex robots. If sex is typically built on a relationship between people, with established forms of communication that establish both consent and desire, what happens when this becomes literally codified? What happens when these additional layers of engineering and commerce get added on top of basic sexual interaction?

Is the problem that we want artificial intelligence to be human?

Towards the end of the same piece, Rogers finds a possible solution from privacy researcher Sarah Jamie Lewis. Lewis wonders whether one of the main problems with sex robots is this need to think in humanoid terms: “We’re already in the realm of devices that look like alien tech. I looked at all the vibrators I own. They’re bright colors. None of them look like a penis that you’d associate with a human. They’re curves and soft shapes.”

Of course, this isn’t an immediate solution – sex robots are meant to simulate sex in its traditional (arguably heteronormative) sense. What Lewis suggests, and Rogers seems to agree with, is really just AI-assisted masturbation. But their insight is still useful. On reflection, there is a very real and urgent question about the way in which we deploy artificial intelligence. We need to think carefully about what we want it to replicate and what we want it to encourage.

Sex robots are the starting point for thinking seriously about artificial intelligence

It’s worth noting that when discussing algorithms we end up looping back onto ourselves. Sex robots, algorithms, artificial intelligence – they’re a problem insofar as they pose questions about what we really value as humans. They make us ask what we want to do with our time and how we want to interact with other people. This is perhaps a way forward for anyone that builds or interacts with algorithms.
Whether they help you get off or find your next purchase, consider what your algorithm is doing: what it’s encouraging, storing, processing, substituting. We can’t prepare for a future with artificial intelligence without seriously considering these things.