Senior Python Developer – Data Ingest
We are seeking an experienced Senior Python Developer to join and expand our Data Ingest Team. In this role, you will be responsible for building and maintaining web scrapers, REST API loaders, SFTP loaders, and other data feeds such as RSS or XML-based sources. You will work with primary external data sources, positioned at the "left side" of our data pipelines, playing a key role in feeding reliable and timely data into our systems. You will help shape the team’s technical direction, establish best practices, and collaborate closely with data engineering, data analysts, and business stakeholders to ensure the ingestion layer meets evolving business needs.
What will be your responsibilities?
Build and maintain robust data ingestion pipelines using Python, including web scrapers, REST API loaders, and SFTP clients (a minimal sketch of such a pipeline follows this list).
Design and implement reliable extraction workflows from primary external data sources into our PostgreSQL database.
Build and scale REST API loaders, and create in-house APIs on top of our databases with FastAPI (see the second sketch below).
Continuously optimize ingestion systems for resiliency, performance, and long-term maintainability, ensuring they scale with increasing data volume and business demands.
Work closely with data engineering, data analysts, and business stakeholders to ensure ingest systems align with downstream processing, storage, and analytical use cases.
Contribute to architectural decisions around deployment, containerization, and scheduling of ingestion jobs (e.g., via Docker, Airflow, or similar).
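To give you a taste of the day-to-day work, here is a minimal sketch of the kind of pipeline described above: fetch a page, parse it with BeautifulSoup, and load the rows into PostgreSQL via SQLAlchemy. The target site (a public scraping sandbox), the table, and the connection string are illustrative placeholders, not our actual sources.

```python
import requests
from bs4 import BeautifulSoup
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Quote(Base):
    """Hypothetical target table for the scraped records."""
    __tablename__ = "quotes"
    id = Column(Integer, primary_key=True)
    text = Column(String, nullable=False)
    author = Column(String, nullable=False)

# Public scraping sandbox used here as a stand-in for a real source.
URL = "https://quotes.toscrape.com/"

def scrape_and_load(db_url: str) -> None:
    # Fetch the page and fail loudly on HTTP errors.
    response = requests.get(URL, timeout=30)
    response.raise_for_status()

    # Parse the DOM and pull out the fields we care about.
    soup = BeautifulSoup(response.text, "html.parser")
    rows = [
        Quote(
            text=div.select_one(".text").get_text(strip=True),
            author=div.select_one(".author").get_text(strip=True),
        )
        for div in soup.select("div.quote")
    ]

    # Persist the whole batch in a single transaction.
    engine = create_engine(db_url)
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add_all(rows)
        session.commit()

if __name__ == "__main__":
    # Placeholder connection string, not a real credential.
    scrape_and_load("postgresql+psycopg2://user:password@localhost/ingest")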
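And a minimal sketch of the in-house API side: exposing a record over HTTP with FastAPI. The `Quote` model and the in-memory store are illustrative stand-ins for a real database query layer.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Ingest API (sketch)")

class Quote(BaseModel):
    id: int
    text: str
    author: str

# Stand-in for a real database layer.
FAKE_DB = {1: Quote(id=1, text="example", author="example")}

@app.get("/quotes/{quote_id}", response_model=Quote)
def read_quote(quote_id: int) -> Quote:
    # Look the record up; return 404 if it does not exist.
    quote = FAKE_DB.get(quote_id)
    if quote is None:
        raise HTTPException(status_code=404, detail="Quote not found")
    return quote
```

Served in development with, for example, `uvicorn app:app --reload`.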
What do we expect?
Required Skills and Qualifications:
Proven experience with Python in the web-scraping domain, including libraries such as requests, curl_cffi, BeautifulSoup, and Playwright.
Familiarity with core Python libraries and frameworks such as Pandas, SQLAlchemy, and FastAPI, as well as other web frameworks.
Solid understanding of HTML, CSS, and JavaScript, with the ability to analyze and parse complex DOM structures.
Strong knowledge of HTTP protocols, request/response flows, headers, and status codes relevant to scraping and API interaction.
Experience consuming external APIs (REST or GraphQL) for data ingestion, including authentication and pagination handling.
Exposure to cloud platforms such as AWS, Azure, or GCP, as well as Docker and PostgreSQL, particularly for deploying and scaling ingestion pipelines.
Practical experience with asynchronous programming in Python using libraries like aiohttp to improve performance and concurrency in data ingestion (a short sketch follows this list).
Prior experience in commodity trading, energy, or other data-intensive domains is highly desirable.
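As a rough illustration of the asynchronous style mentioned above, the sketch below fans out several HTTP requests concurrently with aiohttp and asyncio.gather instead of fetching them one after another. The URLs are placeholders.

```python
import asyncio
import aiohttp

# Hypothetical endpoints; any list of URLs works the same way.
URLS = [f"https://httpbin.org/get?page={i}" for i in range(5)]

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    # One request; connection pooling is handled by the shared session.
    async with session.get(url) as response:
        response.raise_for_status()
        return await response.text()

async def main() -> None:
    # Issue all requests concurrently and wait for them together.
    async with aiohttp.ClientSession() as session:
        pages = await asyncio.gather(*(fetch(session, u) for u in URLS))
    print(f"fetched {len(pages)} pages")

if __name__ == "__main__":
    asyncio.run(main())
```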
Personal Attributes:
Analytical and detail-oriented, with strong problem-solving skills.
Ability to work closely with business stakeholders to ensure solutions meet real-world business requirements.
Good communication skills for both technical and non-technical audiences.
Drive to stay up to date with the latest technologies and innovations in web scraping.
What can you expect from us?
A challenging role at a major Czech energy company.
The opportunity to look under the hood of commodity trading and work alongside colleagues with deep experience in the field.
Fast career growth as the whole company expands.
Possibility of further education (language courses, professional certifications).
A representative office in the center of Prague.
Competitive salary and an annual bonus.
Classic benefits such as 5 weeks of holidays, a pension insurance contribution, a company contribution to the cafeteria benefits system, a Multisport card, etc.
Would you like to know more? Please do not hesitate to contact me.
I am looking forward to our collaboration!