Plan 2: Get a soup from multiple URLs

Plan 2: Example

Sometimes we want to get information from multiple web pages that have the same layout. For example, all of the UMSI faculty pages have the same general design.

Plan 2 outline

We are interested in getting information about multiple UMSI professors: Dr. Barb Ericson, Dr. Steve Oney, and Dr. Paul Resnick.

Their webpages are:

https://www.si.umich.edu/people/barbara-ericson

https://www.si.umich.edu/people/steve-oney

https://www.si.umich.edu/people/paul-resnick

In this code, we get a soup from multiple UMSI faculty pages.

Goal: Get a soup from multiple webpages
# Load libraries for web scraping
from bs4 import BeautifulSoup
import requests
# Get a soup from multiple URLs
base_url = 'https://www.si.umich.edu/people/'
endings = ['barbara-ericson', 'steve-oney', 'paul-resnick']
for ending in endings:
    url = base_url + ending
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
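As written, the loop builds a new soup on each pass and then replaces it on the next one. If you want to keep all of the soups so you can work with them after the loop finishes, one option is to collect them in a list. Here is a minimal sketch of that idea (the list name soups is our choice, not part of the plan):

# Collect the soup from each page so all of them are available after the loop
from bs4 import BeautifulSoup
import requests

base_url = 'https://www.si.umich.edu/people/'
endings = ['barbara-ericson', 'steve-oney', 'paul-resnick']

soups = []
for ending in endings:
    url = base_url + ending
    r = requests.get(url)
    soups.append(BeautifulSoup(r.content, 'html.parser'))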

Plan 2: When to use this plan

Use this plan when you want to scrape the same thing from multiple webpages.

Plan 2: How to use this plan

Look at the URLs of the webpages you want to scrape and determine which parts they have in common and which parts differ. The part they share becomes the base_url; the parts that differ become the endings.
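For example, the three faculty URLs above all start with https://www.si.umich.edu/people/, so that shared prefix is the base_url, and the name slugs that follow it are the endings. A quick sketch to check the split (the print loop is just for verification and is not part of the plan itself):

base_url = 'https://www.si.umich.edu/people/'
endings = ['barbara-ericson', 'steve-oney', 'paul-resnick']

# Rebuild each full URL from the shared base and its ending to confirm the split
for ending in endings:
    print(base_url + ending)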

Plan 2: Exercises

If you also want to get the link to the most recent news item from Dean Tom Finholt’s page, how would you change the code above? Dean Finholt’s web page is https://www.si.umich.edu/people/thomas-finholt.

Change the code and run it to see if you’re right!
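If you want to check your answer, one possible change is sketched below. Adding 'thomas-finholt' to the endings list gets a soup for Dean Finholt's page; actually finding the most recent news link would require inspecting that page's HTML, so the final comment is only a placeholder, not a working selector.

from bs4 import BeautifulSoup
import requests

base_url = 'https://www.si.umich.edu/people/'
# Add 'thomas-finholt' so Dean Finholt's page is scraped too
endings = ['barbara-ericson', 'steve-oney', 'paul-resnick', 'thomas-finholt']

for ending in endings:
    url = base_url + ending
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    # To get the most recent news link, inspect the page's HTML and call
    # soup.find() with the tag and class you see there (not shown here).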
