Generating Algorithms for Hot Spots Policing


Large police departments increasingly rely on algorithms to predict where crime will occur so that they can allocate resources to the communities that need them most. While these algorithms have been shown to reduce crime, they are not built to account for historical bias in their training data, especially bias against racial and class minorities. As a result, they risk reinforcing historical prejudice against already persecuted groups. Team GAHSP aims to address these inherent issues with predictive policing while also improving crime-prediction accuracy. Using modern machine learning techniques, better data cleansing and weighting, and algorithmic safeguards such as unfairness penalties, we aim to construct an algorithm that delivers more accurate crime prediction while minimizing bias in ways that past algorithms have not attempted, or have not succeeded, in doing.
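To make the idea of an unfairness penalty concrete, the sketch below (all data and parameter choices are hypothetical, not the project's actual method) trains a simple logistic-regression risk model twice on synthetic data: once with an ordinary log-loss, and once with an added demographic-parity penalty that discourages the model from assigning different average risk scores to two groups.

```python
import numpy as np

# Synthetic data (purely illustrative): feature X1 is correlated with group
# membership, so an unconstrained model produces a between-group score gap.
rng = np.random.default_rng(0)
n, d = 400, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(float)
group = (X[:, 1] > 0).astype(int)  # hypothetical demographic attribute

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, lam):
    """Gradient of log-loss plus lam * (between-group mean-score gap)^2."""
    p = sigmoid(X @ w)
    g = X.T @ (p - y) / n                      # standard log-loss gradient
    # Demographic-parity penalty: squared gap between groups' mean scores.
    gap = p[group == 0].mean() - p[group == 1].mean()
    s = p * (1 - p)                            # sigmoid derivative
    dgap = (X[group == 0] * s[group == 0, None]).mean(0) \
         - (X[group == 1] * s[group == 1, None]).mean(0)
    return g + lam * 2 * gap * dgap            # chain rule through the gap

def train(lam, steps=500, lr=0.5):
    w = np.zeros(d)
    for _ in range(steps):
        w -= lr * grad(w, lam)
    return w

def score_gap(w):
    p = sigmoid(X @ w)
    return abs(p[group == 0].mean() - p[group == 1].mean())

gap_plain = score_gap(train(lam=0.0))   # no fairness constraint
gap_fair = score_gap(train(lam=10.0))   # with unfairness penalty
print(f"gap without penalty: {gap_plain:.3f}, with penalty: {gap_fair:.3f}")
```

The penalty weight `lam` trades prediction accuracy against fairness: a larger value shrinks the between-group score gap further, at some cost to log-loss. This demographic-parity penalty is only one of several fairness criteria the project could adopt.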