Job Description
About Talvora – Transforming Retail Through Technology

At Talvora, we are reimagining the future of retail by empowering millions of shoppers with seamless, data-driven experiences. As a global leader in e-commerce and omnichannel services, we combine cutting-edge technology with deep industry expertise to deliver rapid, reliable, and personalized solutions at scale. Our mission is simple: use data to make life easier for every customer, every day.

Whether you are a seasoned data professional or a rising talent eager to sharpen your skills, Talvora provides a collaborative environment where innovative ideas flourish and every line of code has the potential to impact millions of people worldwide. We are proud of our remote-first culture, flexible scheduling, and a commitment to inclusivity that ensures every team member feels valued, heard, and empowered to grow.

Role Overview – Remote Data Entry & ETL Pipeline Engineer

We are seeking a meticulous, technically adept Data Entry & ETL Pipeline Engineer to join the Sponsored Search Information group at Talvora. This team designs, builds, and maintains the robust data pipelines, data sets, and ETL processes that power Talvora's Sponsored Search Advertising platform. The role offers both night and day shift options, giving you the flexibility to choose a schedule that best fits your lifestyle while working 100% remotely.

Key Responsibilities

- Design & Develop High-Volume Data Pipelines: Create resilient, scalable pipelines for ingesting, transforming, and storing the massive data feeds that power advertising analytics on Zenvora.com and affiliated sites.
- Maintain Data Quality & Accessibility: Implement validation, deduplication, and monitoring mechanisms to ensure data accuracy, completeness, and timely availability for downstream consumers.
- Leverage Cloud Platforms: Deploy and manage data workloads on Microsoft Azure and Google Cloud, using services such as Azure Data Lake, BigQuery, and Cloud Storage.
- Containerized Deployment: Package ETL jobs and microservices in Docker containers and orchestrate them with Kubernetes for reliable, automated scaling.
- Collaborate Cross-Functionally: Partner with product owners, QA engineers, data scientists, and partner operations to translate business requirements into technical solutions.
- Support Incident Response: Participate in on-call rotations, troubleshoot production issues, and coordinate rapid resolutions across multiple teams.
- Continuous Improvement: Identify bottlenecks, propose architectural enhancements, and champion best practices for performance, security, and cost-efficiency.
- Documentation & Knowledge Sharing: Produce clear technical documentation, conduct code reviews, and mentor junior engineers to foster a culture of learning.

Essential Qualifications

- Data Engineering Fundamentals: Proven experience designing relational and NoSQL data models (e.g., Azure SQL, PostgreSQL, Cassandra) and implementing ETL workflows.
- Programming & Scripting Skills: Proficiency in SQL and at least one of Python, Scala, or Java for data manipulation and pipeline development.
- ETL Tool Expertise: Hands-on experience with Apache Airflow, Apache NiFi, or similar orchestrators; familiarity with Apache Flink is a plus.
- Streaming Platforms: Practical knowledge of real-time messaging systems such as Apache Kafka for event-driven data pipelines.
- Container & Orchestration Proficiency: Experience building Docker images and managing workloads in Kubernetes clusters.
- Cloud Service Acumen: Direct exposure to Azure (e.g., Azure Data Factory, Azure Databricks) and Google Cloud Platform services (e.g., Dataflow, BigQuery).
- Analytical Mindset: Strong problem-solving abilities, a data-first approach to decision making, and meticulous attention to detail.
- Communication Skills: Ability to articulate technical concepts clearly to both technical and non-technical stakeholders.
- Self-Management: Demonstrated ownership, accountability, and the capacity to thrive in a remote, asynchronous work environment.

Preferred Qualifications & Nice-to-Have Skills

- Experience with distributed compute frameworks such as Hadoop, Spark, or Flink.
- Familiarity with data warehousing solutions such as Snowflake, Hive, or Redshift.
- Knowledge of CI/CD pipelines using tools such as GitHub Actions, Jenkins, or Azure DevOps.
- Prior work on advertising technology platforms, recommendation engines, or marketing analytics.
- Exposure to infrastructure-as-code (IaC) tools such as Terraform or ARM templates.
- Certifications related to Azure (e.g., Azure Data Engineer Associate) or Google Cloud (e.g., Professional Data Engineer).

Core Skills & Competencies for Success

- Technical Rigor: Ability to write clean, maintainable code and apply software engineering best practices.
- Scalability Thinking: Design solutions that handle petabyte-scale data while maintaining low latency.
- Reliability Focus: Implement robust error handling, retries, and observability (logging, metrics, tracing).