Two Barrels is hiring a Senior Data Engineer for up to $150,000/year. You will be a traditional company employee. This is a full-time, 40-hour, M-F remote position with company benefits.

As a Senior Data Engineer at Two Barrels, you will work closely with our Software and Analyst teams to manage everything that has to do with data. This role will help get large datasets into the hands of the people who need them while also making sure they don’t adversely impact other things we have going on in our software ecosystem. There will be a heavy focus on infrastructure, data pipelines, and scalable database design – we want someone who can help us improve our current databases and infrastructure and maintain them in a cost-effective manner. This is more than just building and maintaining ETL pipelines. We need innovation, creativity, and solutions that will have a significant impact on how our data is handled and created.


Remote | Spokane – Austin – SLC


Full Time




  • Focus on data infrastructure. Lead and build out data services/platforms from scratch (using open-source tech).
  • Create and maintain transparent, bulletproof ETL (extract, transform, and load) pipelines that clean, transform, and aggregate unorganized, messy data into databases or data sources.
  • Consume data from roughly 40 different sources.
  • Collaborate closely with our Data Analysts to get them the data they need.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implement new business capabilities and integration points.
  • Create proactive monitoring so we learn about data breakages or inconsistencies right away.
  • Maintain internal documentation of how the data is housed and transformed.
  • Improve existing data models, and design new ones to meet the needs of data consumers across Two Barrels.
  • Stay current with the latest cloud technologies, patterns, and methodologies; share knowledge by clearly articulating results and ideas to fellow engineers, data analysts, and stakeholders.

Minimum Qualifications:

  • Bachelor’s degree (BA or BS) in computer science or a related field.
  • 2+ years in a full-stack development role.
  • 4+ years of experience working in a data engineering role or related position.
  • 2+ years of experience standing up and maintaining a Redshift warehouse.
  • 4+ years of experience with Postgres, specifically on RDS.
  • 4+ years of AWS experience, specifically S3, Glue, IAM, EC2, DynamoDB, and other related data solutions.
  • Experience working with Redshift, DBT, Snowflake, Apache Airflow, Azure Data Warehouse, or other industry standard big data or ETL related technologies.
  • Experience working with both analytical and transactional databases.
  • Advanced working knowledge of SQL (preferably PostgreSQL) and experience working with relational databases.
  • Experience with Grafana or other monitoring/charting systems.
  • Proven knowledge of data platforms, with tangible examples of designing and developing complex data pipelines that support better business decision making.
  • Experience hosting and operating data platforms in a cost-efficient manner.
  • Ability to translate technical concepts and business requirements into non-technical, lay terms.
  • Proven expertise with data architecture design and deployment, data modeling, and database development.

Preferred Qualifications:

  • Professional experience with Kafka.

Why you might like this job:

  • You are a software engineer who happens to love data, but you still want to code!
  • You are eager to ask questions and find out the “why”.
  • You want to be a part of a team making some really cool software.
  • You are driven to creatively approach solutions based on clients’ needs.
  • You love open source technology and feel passionate about companies and developers who make decisions based on data.
  • You want your work to have a long-term, meaningful purpose.
  • #BI-Remote