
Case Study: Scooting Around!

1. Case Study: Scooting Around!

We are here: the last lesson of the last chapter. In the past four chapters, we learned how to use cloud storage with S3, notifications with SNS, image recognition with Rekognition, text translation with Translate, and NLP with Comprehend. Let's put these skills together in a real-world use case!

2. The Quandary

In the past year, the City has seen an onslaught of scooters on its streets. While many residents enjoy this fun mode of transportation, the City Council is getting pressure from elderly and disabled residents to regulate scooters better. People often leave them in the street, blocking the sidewalk.

3. The Quandary

The City Council is debating whether to pass a policy under which the city would pick up scooters that block the sidewalk. However, since the people who show up to council meetings don't tend to represent the city's demographics, the City Council asked Sam to look at the GetItDone requests and estimate the volume of scooters the city would have to pick up.

4. The Data

Sam has narrowed the GetItDone dataset down to the images in an S3 bucket, a description field, and a case latitude/longitude. As we can see, some of the descriptions are not complaints but compliments.

5. Final Product

Sam knows that GetItDone does not always receive requests in English; San Diego is a diverse city. So her first task is to translate all the public descriptions to English.

6. Final Product

She knows that not every scooter request will actually include the word "scooter," so she will use the images attached to GetItDone requests to detect scooters.

7. Final Product

Her hunch is that when a scooter is blocking the sidewalk, the description's sentiment will be negative. She will analyze each description for its sentiment.

8. Final Product

But there are definitely happy posts about scooters too.

9. Final Product

If she can filter the cases down to those that involve a scooter blocking the sidewalk, she can count them and give the Council a rough idea of how many scooters the City would have to pick up. She knows that the best way to do this would be to train a model on many labeled descriptions, but since she doesn't have those yet, this will work for a prototype. Let's get Sam on her way!

10. Initialize boto3 service clients

First, Sam needs to initialize all the boto3 service clients. We start with the Rekognition and Comprehend clients.
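
A minimal sketch of that setup, assuming the clients live in us-east-1 and credentials come from the environment or AWS config rather than being passed explicitly:

    import boto3

    # Client for image recognition (label detection)
    rekognition = boto3.client('rekognition', region_name='us-east-1')

    # Client for natural language processing (sentiment detection)
    comprehend = boto3.client('comprehend', region_name='us-east-1')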

11. Initialize boto3 service clients

Followed by the Translate client.
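
Continuing the same sketch, the Translate client is initialized the same way:

    # Client for text translation
    translate = boto3.client('translate', region_name='us-east-1')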

12. Translate all descriptions to English

Next, we translate all the public descriptions to English, to make sure we're working in one language across the board.

13. Translate all descriptions to English

We overwrite the public_description field with the new translation.
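
A sketch of that step, assuming the requests sit in a pandas DataFrame named dg_df (a hypothetical name) with a public_description column:

    for index, row in dg_df.iterrows():
        desc = dg_df.loc[index, 'public_description']
        if desc != '':
            # Translate to English, auto-detecting the source language
            resp = translate.translate_text(
                Text=desc,
                SourceLanguageCode='auto',
                TargetLanguageCode='en')
            # Overwrite the original description with the English translation
            dg_df.loc[index, 'public_description'] = resp['TranslatedText']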

14. Detect text sentiment

Next, we iterate over every row of the DataFrame to detect the sentiment of the text in the descriptions.

15. Detect text sentiment

We grab the Sentiment key from the response and store it in the sentiment column.
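
Continuing with the same hypothetical dg_df DataFrame, the sentiment step might look like this:

    for index, row in dg_df.iterrows():
        desc = dg_df.loc[index, 'public_description']
        if desc != '':
            # Detect the sentiment of the (now English) description
            resp = comprehend.detect_sentiment(
                Text=desc,
                LanguageCode='en')
            # Store the Sentiment key (e.g. 'NEGATIVE', 'POSITIVE') in a new column
            dg_df.loc[index, 'sentiment'] = resp['Sentiment']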

16. Detect scooter in image

Finally, we run Rekognition label detection against every image that we stored from GetItDone. This will let us see which images contain a scooter.

17. Detect scooter in image

We store that value in the img_scooter column: a 1 if a scooter is detected in the image, and a 0 if not.
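
A sketch of the label-detection loop; the bucket name 'gid-images' and the img column holding each image's S3 key are assumptions for illustration:

    for index, row in dg_df.iterrows():
        # Run label detection on the image stored in S3
        resp = rekognition.detect_labels(
            Image={'S3Object': {
                'Bucket': 'gid-images',
                'Name': dg_df.loc[index, 'img']}})

        # Flag the row with 1 if any returned label is 'Scooter', else 0
        scooter = 0
        for label in resp['Labels']:
            if label['Name'] == 'Scooter':
                scooter = 1
                break
        dg_df.loc[index, 'img_scooter'] = scooter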

18. Final count!

Then, we select all rows where a scooter was detected in the image and the sentiment was negative, storing them in a pickups DataFrame. The number of rows in that DataFrame is roughly how many scooters the City can expect to pick up!
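
Putting it together with the columns built above (same hypothetical DataFrame):

    # Keep only cases with a scooter in the image and a negative description
    pickups = dg_df[
        (dg_df['img_scooter'] == 1) &
        (dg_df['sentiment'] == 'NEGATIVE')]

    # A rough estimate of how many scooters the City would have to pick up
    print(len(pickups))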

19. Let's practice!

In this lesson, Sam used computer vision and natural language processing to translate text, detect sentiment, and find scooters in images. Politicians never have enough, though, so she will probably be asked for a few more things. Let's practice!