

Full / Part-time: Full Time
Hours Per Week: 40
Location: 1801 S. 2nd St. McAllen, TX 78503


The Data Engineer I is part of a high-performing Business Analytics and Insights team that provides actionable, innovative solutions to complex, unstructured business questions by leveraging diverse datasets and advanced statistical techniques.

The Data Engineer will be responsible for architecting and deploying a high-performance data platform that supports analytics initiatives across the enterprise. They will build complex ETL and ELT pipelines that consolidate data from multiple disparate sources into the bank’s enterprise data warehouse, and will take ownership of the bank’s enterprise data architecture.


The duties listed below may not include all responsibilities that the person in this role may be asked to perform. Incumbent may be required to perform other related duties as assigned.

1. Serves as a subject matter expert on the bank’s enterprise data architecture.
2. Provides support for the bank’s analytics and data-driven initiatives.
3. Translates complex engineering and technical concepts for senior management and non-technical employees to enable understanding and drive informed business decisions.
4. Performs data discovery across the enterprise to consolidate large amounts of data into a single unified platform.
5. Works closely with contractors and external vendors to enhance the bank’s data architecture.
6. Takes full ownership of data engineering processes and the enterprise data architecture, from conceptualization through design, development, and deployment.
7. Applies strong planning and project management skills, juggling multiple tasks and priorities.
8. Works closely with the rest of the Shared Services department to ensure that data management processes align with the overall goals of the enterprise.
9. Continues to evolve and improve technical skills in SQL, Python, Spark, and other emerging data management technologies.
10. Designs and builds large, complex information sets; integrates and extracts relevant information from large amounts of structured and unstructured data (internal and external) to enable analytical solutions.
11. Leads efforts to develop scalable, efficient, automated solutions for large-scale data analysis, model development, model validation, and model implementation.
12. Provides guidance to team members on data management approaches and data pipeline methodologies.
13. Contributes to documentation efforts centered on data governance, data management, and data architecture.


These specifications are general guidelines based on the minimum experience normally considered essential to the satisfactory performance of this position. The requirements listed below are representative of the knowledge, skill and/or ability required to perform the position in a satisfactory manner. Individual abilities may result in some deviation from these guidelines.

1. Bachelor's degree in computer science, mathematics, statistics, economics, or another quantitative discipline; 3+ years of related experience beyond the minimum required may be substituted in lieu of a degree.
2. Experience in designing, developing, and deploying custom ETL and/or ELT solutions
3. Ability to translate business requirements into database models, data processing pipelines, application programming interfaces (API), and data tooling
4. Experience with Python is required
5. Understanding of data warehousing technologies such as Azure SQL Data Warehouse, Snowflake, BigQuery, or Redshift
6. Experience with schema design and dimensional data modeling
7. Familiarity with analytics engines such as Apache Spark, Apache Beam, Dask, or Apache Storm
8. Excellent knowledge of relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, Aurora, or MariaDB
9. Strong written, oral, and presentation communication skills with the ability to shape messages and content to audiences of widely varying roles and technical backgrounds


• 2–4 years of demonstrated experience in ETL/ELT development, database administration, software engineering, or analytics
• Experience working in regulated environments (e.g., financial services)
• Experience with any of the following cloud data and analytics technologies (preferred):
  • Azure – Azure Databricks, Azure Data Lake, Event Hubs, SQL Data Warehouse, Azure Data Factory
  • AWS – Elastic MapReduce, Athena, Redshift, Kinesis, Glue, S3
  • GCP – BigQuery, Bigtable, Dataflow, Dataproc, Google Cloud Storage
• Machine learning experience with Azure ML, Spark MLlib, TensorFlow, or similar
• Experience working with notebook environments such as Jupyter or Zeppelin



Why Work at Vantage?

Our People/Our Culture

At Vantage, we have a committed professional team that shares a passion for service. We are a team that values relationships and works together for the greater good of our customers. We strive to follow our motto: "One Team. One Bank. One Company."

Our Benefits

  • 401(k) Plan Contribution Match
  • Health & Dental Insurance
  • Disability Insurance
  • Bonus & Incentive Pay Programs
  • Life Insurance
  • Paid vacation and sick leave

Our Programs

  • Health & Wellness Programs
  • Training & Career Development Programs
  • Academic Partnerships
  • Mentorship Program
  • Internship Program

More Perks

  • Employee Advisory Committee
  • Active Community Involvement

For additional information, please call the Human Resources department at (956) 664-8485.