

# Web Scraping Project: VacancyMail Job Listings

This project scrapes job listings from VacancyMail, a popular job board in Zimbabwe. The scraper collects job details such as job title, company name, location, expiration date, and job type, and saves them to a CSV file.

## Table of Contents

- Project Overview
- Features
- Installation
- Usage
- Scheduling
- Contributing
- License

## Project Overview

The VacancyMail Scraper collects job listings from VacancyMail's job board using the BeautifulSoup4 and requests libraries. It is designed to extract the relevant job details from the webpage and store them in a structured CSV format for easy analysis.

The scraper is scheduled to run daily to gather the most recent job listings.
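At its core this is a single HTTP GET followed by HTML parsing. The sketch below shows that skeleton; the listings URL and the function name are illustrative assumptions, not code taken from web_scraper.py:

```python
# Minimal fetch-and-parse skeleton (a sketch, not the project's actual code).
# The listings URL below is an assumption; check the site for the real path.
import requests
from bs4 import BeautifulSoup

LISTINGS_URL = "https://vacancymail.co.zw/jobs/"  # assumed URL

def fetch_listings_page(url=LISTINGS_URL):
    """Download the listings page and return it as parsed soup."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # surface HTTP errors instead of parsing junk
    return BeautifulSoup(response.text, "html.parser")

if __name__ == "__main__":
    soup = fetch_listings_page()
    print(soup.title.string if soup.title else "no <title> found")
```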

## Features

Each scraped listing includes:

- **Job Title**: the title of the job listing.
- **Company Name**: the company offering the job.
- **Location**: the location of the job.
- **Expiration Date**: the date the job listing expires.
- **Job Type**: the type of job (Full Time, Part Time, etc.).
- **Posted Time**: the time the job listing was posted.

The scraper handles a variety of job categories and types and can be customized for future improvements; a sketch of the field extraction follows below.
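Concretely, each listing element maps onto one CSV row with those six columns. The following sketch shows one way to do that with BeautifulSoup and csv.DictWriter; every CSS selector and column name here is a hypothetical placeholder, since the real ones depend on VacancyMail's live markup and on how web_scraper.py names its columns:

```python
# Field-extraction sketch: pulls the fields listed above out of each job card
# and writes them to scraped_data.csv. All selectors below are guesses;
# inspect the live VacancyMail markup and adjust before relying on this.
import csv

import requests
from bs4 import BeautifulSoup

LISTINGS_URL = "https://vacancymail.co.zw/jobs/"  # assumed URL
FIELDS = ["Job Title", "Company Name", "Location",
          "Expiration Date", "Job Type", "Posted Time"]

def parse_job_card(card):
    """Map one job-listing element to a CSV row dict (selectors hypothetical)."""
    def text_or_blank(selector):
        node = card.select_one(selector)
        return node.get_text(strip=True) if node else ""

    return {
        "Job Title": text_or_blank("h3.job-title"),
        "Company Name": text_or_blank(".company"),
        "Location": text_or_blank(".location"),
        "Expiration Date": text_or_blank(".expiry"),
        "Job Type": text_or_blank(".job-type"),
        "Posted Time": text_or_blank(".posted"),
    }

def main():
    response = requests.get(LISTINGS_URL, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    cards = soup.select("div.job-listing")  # hypothetical card selector
    with open("scraped_data.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(parse_job_card(card) for card in cards)

if __name__ == "__main__":
    main()
```

Writing through csv.DictWriter keeps the header and the row dictionaries in sync, so adding a new field later is a one-line change.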

## Installation

### Prerequisites

- **Python 3.x**: ensure Python is installed on your machine.
- **pip**: the package manager used to install the Python dependencies.

### Steps to Install

1. Clone the repository or download the project files.
2. Navigate to the project folder in a terminal/command prompt.
3. Install the required Python libraries:

```bash
pip install beautifulsoup4 requests
```

## Usage

### Run the Scraper Manually

To scrape the job listings, run the following command in the terminal:

```bash
python web_scraper.py
```

### CSV Output

After the scraper runs, a scraped_data.csv file is generated containing the collected job listings.
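A quick way to confirm the run worked is to read the file back with the standard library. This check makes no assumption about the exact column names, since it takes the header from the file itself:

```python
# Sanity-check the generated CSV: count rows and preview the first few.
import csv

with open("scraped_data.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)  # the file's first line becomes the dict keys
    rows = list(reader)

print(f"{len(rows)} listings, columns: {reader.fieldnames}")
for row in rows[:3]:  # preview the first three rows
    print(row)
```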

## Scheduling

The scraper can be scheduled to run automatically every day using Task Scheduler on Windows or cron on Linux/macOS.
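An alternative to the OS-level schedulers described below is to keep the scheduling inside Python. Here is a minimal sketch using the third-party schedule package (pip install schedule); it assumes web_scraper.py exposes a main() entry point, which is an assumption about the module layout, not something the project documents:

```python
# In-process alternative to Task Scheduler / cron using the third-party
# `schedule` package. The web_scraper.main import is hypothetical; adjust
# it to whatever entry point the scraper actually exposes.
import time

import schedule
from web_scraper import main as run_scraper  # hypothetical entry point

schedule.every().day.at("08:00").do(run_scraper)  # daily at 8 AM

while True:
    schedule.run_pending()  # execute any job whose scheduled time has passed
    time.sleep(60)          # poll once a minute
```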

### Windows (using a .bat file)

Create a .bat file that executes the scraper, for example scrape_jobs.bat:

```bat
python C:\path\to\web_scraper.py
```

Then use Task Scheduler to run the .bat file daily.

### Linux/macOS (using cron)

Open the crontab file:

```bash
crontab -e
```

Add a new cron job to run the scraper daily at a specific time. For example, to run it at 8 AM every day:

```
0 8 * * * /usr/bin/python3 /path/to/web_scraper.py
```

## Contributing

If you'd like to contribute to this project, feel free to fork the repository, make improvements, and submit a pull request.

## License

This project is licensed under the MIT License; see the LICENSE file for details.

## Notes

- The Installation and Usage sections can be customized to match your own project setup.
- Add instructions for any other configuration you rely on (such as Task Scheduler or cron specifics) as needed.
