Researchers use artificial intelligence to identify potentially unsafe locations in cities

Identifying location-specific attributes is an important aspect of social artificial intelligence. However, models that are frequently trained on subjective perceptions and still images are unreliable at predicting crime. Now, researchers from GIST in Korea take things to the next level by training a neural network on a geotagged dataset of reported deviant incidents and sequential images of the incident locations, linking deviant behavior to the visual features of a city to accurately determine unsafe locations. Credit: Gwangju Institute of Science and Technology

Identifying possible hotspots of crime in a city is an important issue for urban safety development and can help the authorities take the necessary steps to make the city safer for its residents. The effectiveness of such preventive measures depends on the accuracy of the predictions, which are increasingly being made by artificial intelligence (AI)-based models. Most existing models rely on subjective perceptions of safe locations, socioeconomic status, and still images of crime scenes, and only a few categories of violent crime are used as input data. As a result, there is often a discrepancy between their predictions and reality.

In a new study presented at the AAAI Conference on Artificial Intelligence, researchers from the Gwangju Institute of Science and Technology (GIST) in South Korea proposed a different strategy based on a large-scale dataset and the concept of "deviance," which includes not only reported criminal incidents but also civil complaints about behaviors that violate social norms, collectively referred to as "deviant behavior."

Accordingly, they developed a convolutional neural network, aptly called "DevianceNet," and trained it using a geotagged dataset of deviant incident reports with corresponding sequential images of the incident locations acquired from Google Street View. "Our work is the first study that investigates the relationship between the physical appearance of a city and deviance with deep learning techniques," comments Associate Professor Hae-Gon Jeon, who headed the study.
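The article does not include code, but the core idea of classifying a place from a sequence of street-level images can be sketched briefly. The snippet below is a minimal, hypothetical PyTorch example, not the authors' DevianceNet architecture: a shared convolutional encoder embeds each image in a location's sequence, the embeddings are average-pooled into a single place-level feature, and a linear head predicts whether the place is deviant.

```python
# Minimal sketch (not the authors' DevianceNet): classify a place from a
# sequence of street-view images with a shared CNN encoder.
import torch
import torch.nn as nn

class PlaceSequenceClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, feat_dim: int = 128):
        super().__init__()
        # Small per-image encoder; a pretrained backbone could be swapped in.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, sequence_length, 3, H, W)
        b, t, c, h, w = images.shape
        feats = self.encoder(images.view(b * t, c, h, w))  # (b*t, feat_dim)
        feats = feats.view(b, t, -1).mean(dim=1)           # pool over the image sequence
        return self.head(feats)                            # (b, num_classes)

# Example: a batch of 2 places, each with a 12-image sequence of 64x64 views.
model = PlaceSequenceClassifier()
logits = model(torch.randn(2, 12, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```

Pooling over the whole sequence is one simple way to obtain the kind of holistic, place-level representation the researchers describe; the actual DevianceNet may aggregate the views differently.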

The researchers collected images from 10 GPS coordinates within a 50 m radius of each reported incident site and, for each coordinate, captured images facing 12 directions, for a total of 120 images per incident. Using data from five major cities in South Korea and two in the USA, they trained and tested their model on 2,250 deviant places and 760,952 images. Such a large dataset enhanced the model's ability to detect possible deviant locations. "This improved visual perception tasks such as recognition, classification, and localization," explains Dr. Jeon. "The holistic representation of DevianceNet extracted from entire image sequences makes it possible to accurately classify and detect deviant places."
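As a rough illustration of that sampling scheme, the sketch below enumerates candidate viewpoints around a single incident: 10 GPS points within a 50 m radius and 12 headings per point (every 30 degrees), i.e. 120 views per incident. The random-offset sampling, the example coordinates, and the placeholder API key are assumptions for illustration, not the authors' exact collection procedure.

```python
# Hypothetical sketch of the image-sampling scheme described above:
# 10 GPS points within 50 m of an incident, 12 headings per point.
import math
import random

EARTH_RADIUS_M = 6_371_000.0

def sample_viewpoints(lat: float, lon: float, n_points: int = 10,
                      radius_m: float = 50.0, n_headings: int = 12):
    """Yield (lat, lon, heading) tuples around one incident location."""
    for _ in range(n_points):
        # Uniform random offset inside the radius (equirectangular approximation).
        dist = radius_m * math.sqrt(random.random())
        bearing = random.uniform(0, 2 * math.pi)
        d_lat = (dist * math.cos(bearing)) / EARTH_RADIUS_M
        d_lon = (dist * math.sin(bearing)) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
        p_lat = lat + math.degrees(d_lat)
        p_lon = lon + math.degrees(d_lon)
        for h in range(n_headings):
            yield p_lat, p_lon, h * (360 // n_headings)  # headings every 30 degrees

def streetview_url(lat: float, lon: float, heading: int, key: str = "YOUR_API_KEY"):
    # Placeholder request to the Google Street View Static API.
    return ("https://maps.googleapis.com/maps/api/streetview"
            f"?size=640x640&location={lat:.6f},{lon:.6f}&heading={heading}&key={key}")

views = list(sample_viewpoints(35.2281, 126.8430))  # example coordinates near GIST
print(len(views))                 # 120 views for this incident
print(streetview_url(*views[0]))
```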

Since the model can identify deviant behavior from the visual attributes of the environment, it is not city-specific and can be used to identify potentially unsafe locations even when criminal incident data is not available. "This makes it a useful tool in countries that have poor record keeping. The model can also be integrated into navigational services to suggest safer routes," says Dr. Jeon, speaking on the practical implications of the study. "Additionally, city planners can use the results of the prediction to understand how a city's layout or design environment can be redesigned to lower instances of deviant and criminal activity."
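The article does not describe how such a navigation integration would work, but one straightforward approach is to penalize street segments that pass through places predicted as risky when searching for a route. The sketch below is a hypothetical illustration using networkx, with made-up nodes, lengths, and risk scores; the trade-off weight is a design choice, not something from the study.

```python
# Hypothetical sketch: bias route selection away from places predicted as risky.
import networkx as nx

# Toy street graph; 'length' in metres, 'risk' a predicted deviance score in [0, 1].
G = nx.Graph()
G.add_edge("A", "B", length=100, risk=0.1)
G.add_edge("B", "D", length=100, risk=0.9)  # short but risky segment
G.add_edge("A", "C", length=120, risk=0.1)
G.add_edge("C", "D", length=120, risk=0.1)

RISK_WEIGHT = 500  # metres of detour accepted per unit of predicted risk (a design choice)

def cost(u, v, data):
    # Combined cost: physical length plus a penalty for predicted risk.
    return data["length"] + RISK_WEIGHT * data["risk"]

shortest = nx.shortest_path(G, "A", "D", weight="length")  # ignores risk
safer = nx.shortest_path(G, "A", "D", weight=cost)         # penalizes risky edges
print(shortest)  # ['A', 'B', 'D']
print(safer)     # ['A', 'C', 'D']
```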

More information: DevianceNet: Learning to Predict Deviance from A Large-scale Geo-tagged Dataset, AAAI Conference on Artificial Intelligence.

Provided by Gwangju Institute of Science and Technology
