Lakehouse solution on AWS

My understanding of the difference between a data lake and a lakehouse: a lakehouse has a data warehouse component, and its main data transformations happen in the data warehouse, while a data lake uses pure object storage, with transformations happening on a Spark cluster.

dbt is a very good tool when the backend is a database or data warehouse and the transformations can be expressed as SQL statements. dbt supports most traditional databases, as well as data warehouses like Redshift and Snowflake, and big-data warehouses like Hive and Databricks.
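
As a minimal sketch of how this looks in practice, here is one way to trigger a dbt run from Python using the programmatic API in dbt-core 1.5+ (the project directory and model selector are hypothetical placeholders):

```python
# Minimal sketch: invoking dbt programmatically (dbt-core 1.5+).
# Paths and selectors below are hypothetical placeholders.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# Equivalent to `dbt run --select staging` on the CLI; dbt compiles the
# SQL models and executes them on the configured target (e.g. Redshift
# via the dbt-redshift adapter).
res: dbtRunnerResult = dbt.invoke(
    ["run", "--project-dir", "/opt/dbt/lakehouse", "--select", "staging"]
)

if not res.success:
    raise RuntimeError("dbt run failed")
```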

The lakehouse stack on AWS:

Data storage: S3
Ingestion: Glue Data Catalog (streaming, JDBC, or S3 files) + Glue job (sketched after this list)
Transformation: dbt
Data warehouse: Redshift or Redshift Spectrum (see the Spectrum setup sketch below)
Data catalog: Glue Data Catalog
Job scheduling/Automation: Airflow (using BashOperator or the generic AWS operators; see the DAG sketch below)
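
For the ingestion step, a minimal Glue ETL job sketch (PySpark), assuming a source table already registered in the Glue Data Catalog by a crawler; the database, table, and bucket names are hypothetical:

```python
# Minimal Glue job sketch: read a cataloged source and land it in S3 as
# Parquet. Database/table/bucket names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (populated from streaming, JDBC, or S3 sources)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Write to S3 as Parquet so Redshift Spectrum and dbt can query it downstream
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://my-lakehouse/raw/orders/"},
    format="parquet",
)

job.commit()
```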
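
For the warehouse layer, Redshift Spectrum can query the S3 data in place once an external schema points at the Glue Data Catalog. A sketch of that one-time setup via the Redshift Data API (cluster, role, and database names are hypothetical):

```python
# Minimal sketch: create an external schema so Redshift Spectrum can
# query the Glue-cataloged S3 data without loading it. All identifiers
# are hypothetical.
import boto3

client = boto3.client("redshift-data")

ddl = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_raw
FROM DATA CATALOG
DATABASE 'raw_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/spectrum-role'
"""

client.execute_statement(
    ClusterIdentifier="lakehouse-cluster",
    Database="analytics",
    DbUser="admin",
    Sql=ddl,
)
```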
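
Finally, a sketch of the scheduling layer: an Airflow DAG that runs the Glue ingestion job and then kicks off dbt with a BashOperator, assuming Airflow 2.4+ with the amazon provider installed; job and project names are hypothetical:

```python
# Minimal DAG sketch: Glue ingestion followed by a dbt run.
# Assumes Airflow 2.4+ and apache-airflow-providers-amazon.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="lakehouse_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Trigger the Glue job sketched above (name is hypothetical)
    ingest = GlueJobOperator(
        task_id="ingest_orders",
        job_name="ingest-orders",
    )

    # Run the dbt project against Redshift (path is hypothetical)
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/lakehouse && dbt run",
    )

    ingest >> transform
```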