Remote Data Engineer – Full‑Time Data Entry & Pipeline Specialist at Taskzeno – $27/hr – 100% Work‑From‑Home Flexibility

🌍 Remote, USA 🚀 Full-time 🕐 Posted Recently

Job Description

About Taskzeno – Pioneering Entertainment & Technology Experiences

Taskzeno is a global leader in creating unforgettable entertainment experiences and cutting-edge technology solutions. From world-renowned theme parks and immersive resorts to award-winning media content and sports broadcasting, Taskzeno blends creativity with data-driven insight to delight millions of guests every day. As part of our relentless pursuit of innovation, we empower teams with modern data platforms, advanced analytics, and a culture that champions curiosity, collaboration, and continuous learning. Joining Taskzeno means becoming a key contributor to a legacy of storytelling while shaping the future of data-enabled decision making across a sprawling, multi-disciplinary organization.

Position Overview – Remote Data Engineer (Data Entry & Pipeline Specialist)

We are seeking a talented Data Engineer to join the Data Engineering (DE) team within Taskzeno's Choice Science & Innovation (CSI) division. This role is 100% remote and full-time, with eight-hour workdays at a competitive rate of $27 per hour. You will design, build, and maintain robust data pipelines that power analytics, machine learning models, and operational dashboards for our Media and Entertainment business units. Your work will directly shape how Taskzeno turns raw data into actionable insights that drive guest experiences, operational efficiency, and strategic growth.

Key Responsibilities

- Collaborate with data scientists, product owners, and business stakeholders to gather detailed data requirements and translate them into scalable pipeline designs.
- Develop, schedule, and monitor end-to-end ETL/ELT workflows using modern orchestration tools, ensuring data quality through rigorous testing and validation (see the orchestration sketch after the qualifications below).
- Design and implement data models, schemas, and tables in Snowflake and PostgreSQL environments, optimizing for performance and cost.
- Maintain and improve data ingestion processes, including source-to-target mapping, data cleansing, transformation, and loading.
- Build automated data validation frameworks and data-drift detection mechanisms to support high-quality production pipelines (a minimal validation sketch follows the Career Growth section below).
- Implement CI/CD pipelines for data code using GitLab (or GitHub) and Docker containers, supporting development, QA, and production environments.
- Monitor pipeline health, troubleshoot failures, and proactively refine processes to reduce latency and increase reliability.
- Document data lineage, architecture decisions, and operational runbooks for cross-team knowledge sharing.
- Stay current with emerging data technologies (e.g., Databricks, Spark, cloud data warehouses) and recommend strategic enhancements.

Essential Qualifications

- Education: Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- Experience: Minimum 2 years of hands-on experience developing and supporting ELT/ETL pipelines in a production environment.
- Technical Proficiency: Strong command of SQL (especially Snowflake and PostgreSQL) and Python for data manipulation and pipeline scripting.
- DevOps Acumen: Experience building and maintaining CI/CD pipelines, including containerization with Docker and version control using GitLab/GitHub.
- Automation Skills: Proven ability to create and maintain automated workflow schedulers (e.g., Airflow, Prefect, Dagster).
- Data Modeling Knowledge: Solid understanding of relational and dimensional data modeling principles.
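To give candidates a concrete feel for the orchestration work listed above, here is a minimal sketch of a daily pipeline as it might look in Airflow, one of the schedulers named in the qualifications. The DAG id, task names, and the extract/validate/load helpers are hypothetical placeholders, not actual Taskzeno pipeline code (Airflow 2.4+ API assumed).

```python
# Minimal, illustrative Airflow DAG. All names here are hypothetical examples.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_bookings(**context):
    """Pull raw records from a hypothetical source system."""
    ...


def validate_bookings(**context):
    """Run basic row-count and null checks before loading downstream."""
    ...


def load_to_warehouse(**context):
    """Write cleaned records to a warehouse table (e.g., Snowflake)."""
    ...


with DAG(
    dag_id="daily_bookings_pipeline",       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                      # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_bookings)
    validate = PythonOperator(task_id="validate", python_callable=validate_bookings)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Enforce extract -> validate -> load ordering so bad data never loads.
    extract >> validate >> load
```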
Preferred Qualifications & Nice-to-Have Skills

- Exposure to cloud platforms, particularly AWS services such as S3, Redshift, and Glue.
- Hands-on experience with large-scale data processing frameworks like Databricks, Spark, or Beam.
- Familiarity with data-lake architectures and modern data-mesh concepts.
- Knowledge of data governance, security, and compliance best practices.
- Experience working in multi-environment (Dev, QA, Prod) setups with robust release management.

Core Skills & Competencies for Success

- Analytical Thinking: Ability to dissect complex data problems and devise elegant, maintainable solutions.
- Communication: Clear articulation of technical concepts to non-technical stakeholders and effective documentation.
- Collaboration: Strong team player who thrives in cross-functional settings, leveraging diverse expertise.
- Adaptability: Comfortable navigating a fast-paced, ever-evolving technology landscape.
- Ownership: Proactive attitude toward end-to-end responsibility for pipeline reliability and data quality.

Career Growth & Learning Opportunities at Taskzeno

Taskzeno invests deeply in the professional development of its people. As a Remote Data Engineer, you will have access to:

- Mentorship programs with senior data architects and leaders in the CSI division.
- Paid certifications for AWS, Snowflake, and other cloud data technologies.
- Internal hackathons and innovation labs where you can prototype new data products.
- Cross-functional rotation opportunities to explore analytics, data science, and product management.
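As promised above, here is a minimal sketch of the kind of automated data validation the responsibilities describe: a couple of checks run against a freshly loaded table before it is published. The table and column names are hypothetical, and SQLite stands in for a real Snowflake or PostgreSQL connection purely so the example is self-contained.

```python
# Illustrative post-load data-quality checks. Table/column names are
# hypothetical examples; sqlite3 is a stand-in for a warehouse connection.
import sqlite3


def check_not_empty(conn, table: str) -> None:
    """Fail the pipeline run if the target table loaded zero rows."""
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    if count == 0:
        raise ValueError(f"{table}: expected rows, found none")


def check_no_nulls(conn, table: str, column: str) -> None:
    """Fail if a required column contains NULLs after load."""
    (nulls,) = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()
    if nulls:
        raise ValueError(f"{table}.{column}: {nulls} unexpected NULLs")


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE bookings (guest_id INTEGER, park TEXT)")
    conn.execute("INSERT INTO bookings VALUES (1, 'MagicLand')")
    check_not_empty(conn, "bookings")
    check_no_nulls(conn, "bookings", "guest_id")
    print("all checks passed")
```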

Ready to Apply?

Don't miss out on this amazing opportunity!

🚀 Apply Now
