Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for the analytics and data science teams that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Build processes supporting data transformation, data structures, metadata, dependency management, and workload management.

Qualifications:
- 1+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, along with experience using the following software/tools:
  - Big data tools: Hadoop, Spark, Kafka, etc.
  - Relational SQL and NoSQL databases, including Postgres and Cassandra.
  - Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (an illustrative sketch appears at the end of this posting).
  - AWS cloud services: EC2, EMR, RDS, Redshift.
  - Stream-processing systems: Storm, Spark Streaming, etc.
  - Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Advanced working knowledge of SQL, experience with query authoring and relational databases, and working familiarity with a variety of other databases.
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills for working with unstructured datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
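
Purely as an illustration of the pipeline and workflow-orchestration work described above, the sketch below shows a minimal Airflow DAG. It is not part of the role requirements; the DAG name, schedule, tasks, and sample records are hypothetical placeholders, and it assumes Airflow 2.x with the PythonOperator.

# Minimal, hypothetical extract -> transform -> load DAG sketch.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from an upstream source.
    return [{"user_id": 1, "event": "signup"}]


def transform(**context):
    # Placeholder: clean and reshape the extracted records.
    records = context["ti"].xcom_pull(task_ids="extract")
    return [{**r, "processed": True} for r in records]


def load(**context):
    # Placeholder: write transformed records to the warehouse.
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"Would load {len(rows)} rows")


with DAG(
    dag_id="example_etl",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",      # illustrative cadence
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task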