- 3+ years of professional experience in web scraping or data collection at scale.
- Strong proficiency in Python and common scraping libraries/frameworks (Selenium, Playwright, BeautifulSoup, Scrapy, or similar).
- Solid understanding of HTML, CSS, JavaScript, HTTP, and browser behavior.
- Experience building automated, production-grade workflows with orchestrators/schedulers (Airflow, Prefect, Dagster, or similar).
- Experience building ETL/ELT pipelines and integrating with databases/storage.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Strong experience with logging, monitoring, and alerting.
- Experience with containers (Docker).
- Familiarity with CI/CD workflows.
- Exposure to LLMs for parsing, information extraction, or automation.