UMD.io
UMD.io is an open API for the University of Maryland. The main purpose is to give developers easy access to data to build great applications. In turn, developers can improve the University of Maryland with the things they build.
Features
Easy API access to
- Three years of course data
- Live bus data, through NextBus
- Campus building names and locations
- Basic info about all majors
Getting Started
To use the API, please refer to our documentation.
Development
To work on umd.io, or to run your own instance, start by forking and cloning this repo.
Setting Up Your Environment With Docker
- Install Docker
- Install docker-compose
- Run `docker-compose up`
  - You might need to run Docker-related commands with `sudo` if you're a Linux user
- Run the scrapers: `./umdio.sh scrape`
  - You might need to `chmod +x umdio.sh`
This will take some time, so in the meantime, review the rest of the guide.
Documentation
Within the codebase, comments and good practices are encouraged, and will later be enforced.
For the public-facing API, we use OpenAPI v3 to document everything. You can view our spec here. The docs are served with ReDoc and are automatically built on every tagged commit.
If you're actively working on the documentation, use the `docker-compose-dev.yml` file to view your changes live in ReDoc.
Tech Stack
umd.io runs on Ruby, with libraries such as Rack, Sinatra, Puma, and Sequel. We use PostgreSQL as the database. Everything runs in Docker.
Adding new data
If you're interested in adding a new endpoint, here's a rough guide on how to do it. Our data for majors is a great, simple example.
- Create a model in `/app/models`. We use Sequel on top of Postgres. It should include a `to_v1` method that translates whatever is in your table into the object you want to return.
- Create a scraper in `/app/scrapers`. This populates the table for the model you just created.
  - If you're scraping a live webpage, `courses_scraper.rb` might be a good resource. We use Nokogiri to parse HTML.
  - If you're parsing a JSON file, consider adding it to umdio-data and creating an importer, such as `map_scraper.rb`. (NOTE: umdio-data is now included as a submodule, so this scraper should be updated.)
- Create a controller in `/app/controllers`. Add endpoints as you see fit.
- Register the controller in `server.rb`.
- Write documentation in `openapi.yaml`.
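To illustrate the `to_v1` convention from the steps above, here is a minimal sketch. In the real codebase the model would subclass `Sequel::Model`; a plain Ruby object stands in here, and the field names (`major_id`, `name`, `college`, `url`) are illustrative assumptions, not the actual majors schema.

```ruby
# Hypothetical model sketch. In umd.io this would be a Sequel::Model
# backed by a Postgres table; the fields below are assumptions chosen
# only to show the shape of a to_v1 method.
class Major
  attr_reader :major_id, :name, :college, :url

  def initialize(major_id:, name:, college:, url:)
    @major_id = major_id
    @name = name
    @college = college
    @url = url
  end

  # to_v1 translates whatever is stored in the table row into the
  # JSON-ready object the v1 API returns to clients.
  def to_v1
    {
      major_id: major_id,
      name: name,
      college: college,
      url: url
    }
  end
end

major = Major.new(major_id: 'CMSC', name: 'Computer Science',
                  college: 'CMNS', url: 'https://www.cs.umd.edu')
major.to_v1
```

The controller can then call `to_v1` on each record and serialize the resulting hashes to JSON, keeping the API representation decoupled from the table layout.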
Logging
We use Ruby's built-in logger to output messages to standard output. Learn more about Ruby's logging module
Here's an example of output from the courses scraper:
[2018-10-18 01:35:01] INFO (courses_scraper): Searching for courses in term 201801
[2018-10-18 01:35:02] INFO (courses_scraper): 178 department/semesters so far
[2018-10-18 01:35:02] INFO (courses_scraper): Searching for courses in term 201805
[2018-10-18 01:35:03] INFO (courses_scraper): 301 department/semesters so far
The format of output messages is as follows: `[DATE TIME] LOG_LEVEL (PROGRAM_NAME): {MESSAGE}`
An example of a log call in Ruby: `logger.info(prog_name) { "MESSAGE" }`
Use Ruby's built-in log levels where appropriate: when reporting errors, use `logger.error`; when reporting information, use `logger.info`; and so on.
Our logger implementation is in the `scraper_common.rb` file, located at $app/scraper_common.rb.
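The configuration above can be sketched with Ruby's standard-library `Logger` and a custom formatter that reproduces the message format shown earlier. This is an illustrative sketch, not the actual code in `scraper_common.rb`.

```ruby
require 'logger'

# Formatter matching the documented format:
# [DATE TIME] LOG_LEVEL (PROGRAM_NAME): {MESSAGE}
LOG_FORMAT = proc do |severity, datetime, progname, msg|
  "[#{datetime.strftime('%Y-%m-%d %H:%M:%S')}] #{severity} (#{progname}): #{msg}\n"
end

logger = Logger.new($stdout)
logger.formatter = LOG_FORMAT

# Passing the program name as the first argument and the message in a
# block matches the logging convention used by the scrapers.
logger.info('courses_scraper') { 'Searching for courses in term 201801' }
logger.error('courses_scraper') { 'Request failed' }
```

Because the message is given in a block, it is only evaluated when the logger's level actually permits the entry, which keeps disabled log calls cheap.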
Testing
We use RSpec for testing. You can find the tests in the `tests` directory. Run them with `./umdio.sh test`.
Credits
See contributors
License
We use the MIT License.