OBJECTIVES
- Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes.
- Help streamline our data science workflows, adding value to our product offering and building out our customer lifecycle and retention models.
- Work closely with the data analyst teams to develop data models and pipelines for research and reporting.
JOB DESCRIPTION
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS Big Data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
REQUIREMENTS
1. Knowledge
- Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database systems.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Experience with big data tools: Hadoop, Spark, Hive, etc.
- Experience with relational SQL and NoSQL databases, including Postgres, SQL Server, Oracle, MySQL, Solr, Elasticsearch, and Cassandra.
- Experience with data pipeline and workflow management tools: Luigi, Airflow, etc.
- Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc.
- Knowledge of financial markets or the stock market is a plus (but don't worry if you don't have it yet; you can learn all about how to be a successful stock investor after joining us!).
2. Skills
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes that support data transformation, data structures, metadata, and dependency and workload management.
BENEFITS
- Community of people who do their jobs with integrity and dedication to service
- Professional working environment in the finance industry
- Work with a spirit of mastery, creativity and challenge
- Competitive, performance-based salary that reflects your contribution value
- 100% salary during probation and an annual salary review
- Enrollment in health insurance, medical insurance, and 24/7 accident insurance
- Strong organizational culture with training and development opportunities
- Open working space with modern equipment
- Rich and vibrant cultural and volunteer activities