A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’

Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared with the heavy-handed state oversight in China and the Wild West approach in the United States, the European Union has taken a far more deliberate and consultative path in developing its policies. In April 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence published the ‘Ethics Guidelines for Trustworthy AI’, which set out seven key requirements for ensuring that AI systems are human-centric, fair, transparent, and accountable. The guidelines are non-binding, but they represent a significant step forward in the EU’s efforts to regulate AI.

However, the guidelines are not without their critics. Some argue that they are too vague and lack enforcement mechanisms; others contend that they are too narrow in scope and fail to address some of the most pressing concerns about AI, such as data privacy and facial recognition.

In this report, we take a critical look at the EU’s ‘Ethics Guidelines for Trustworthy AI’, assess the strengths and weaknesses of the guidelines, and make recommendations for how they could be improved.