Description
Hi Trino team,
I am currently upgrading from Trino version 432 to the latest version 474 (I know it's a big jump), and everything seems to work apart from one thing:
The CREATE OR REPLACE MATERIALIZED VIEW statement seems to break an existing materialized view.
Steps to reproduce (Trino version 474, using the Iceberg connector with the legacy file system enabled; the schema is 11111111_1111_1111_1111_111111111111):
> CREATE TABLE orders (orderkey bigint, orderstatus varchar);
CREATE TABLE
> insert into orders values (1, 'foo');
> CREATE MATERIALIZED VIEW view_of_orders as select * from orders;
CREATE MATERIALIZED VIEW
> select * from view_of_orders;
 orderkey | orderstatus
----------+-------------
        1 | foo
(1 row)
> CREATE OR REPLACE MATERIALIZED VIEW view_of_orders as select * from orders;
CREATE MATERIALIZED VIEW
> select * from view_of_orders;
Query 20250417_104518_00012_9jdk2 failed: Metadata not found in metadata location for table 11111111_1111_1111_1111_111111111111.view_of_orders$materialized_view_storage
io.trino.spi.TrinoException: Metadata not found in metadata location for table 11111111_1111_1111_1111_111111111111.view_of_orders$materialized_view_storage
at io.trino.plugin.iceberg.IcebergExceptions.translateMetadataException(IcebergExceptions.java:48)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.refreshFromMetadataLocation(AbstractIcebergTableOperations.java:274)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.refreshFromMetadataLocation(AbstractIcebergTableOperations.java:241)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.refresh(AbstractIcebergTableOperations.java:141)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.current(AbstractIcebergTableOperations.java:124)
at io.trino.plugin.iceberg.catalog.hms.TrinoHiveCatalog.lambda$loadTable$15(TrinoHiveCatalog.java:480)
at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4903)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3574)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2190)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2080)
at com.google.common.cache.LocalCache.get(LocalCache.java:4017)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4898)
at io.trino.cache.EvictableCache.get(EvictableCache.java:119)
at io.trino.cache.CacheUtils.uncheckedCacheGet(CacheUtils.java:39)
at io.trino.plugin.iceberg.catalog.hms.TrinoHiveCatalog.loadTable(TrinoHiveCatalog.java:477)
at io.trino.plugin.iceberg.IcebergMetadata.getMaterializedViewFreshness(IcebergMetadata.java:3826)
at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorMetadata.getMaterializedViewFreshness(ClassLoaderSafeConnectorMetadata.java:1160)
at io.trino.tracing.TracingConnectorMetadata.getMaterializedViewFreshness(TracingConnectorMetadata.java:1322)
at io.trino.metadata.MetadataManager.getMaterializedViewFreshness(MetadataManager.java:1865)
at io.trino.tracing.TracingMetadata.getMaterializedViewFreshness(TracingMetadata.java:1451)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.isMaterializedViewSufficientlyFresh(StatementAnalyzer.java:2336)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitTable(StatementAnalyzer.java:2264)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitTable(StatementAnalyzer.java:522)
at io.trino.sql.tree.Table.accept(Table.java:60)
at io.trino.sql.tree.AstVisitor.process(AstVisitor.java:27)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:541)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.analyzeFrom(StatementAnalyzer.java:4920)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:3096)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuerySpecification(StatementAnalyzer.java:522)
at io.trino.sql.tree.QuerySpecification.accept(QuerySpecification.java:155)
at io.trino.sql.tree.AstVisitor.process(AstVisitor.java:27)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:541)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:549)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:1564)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.visitQuery(StatementAnalyzer.java:522)
at io.trino.sql.tree.Query.accept(Query.java:130)
at io.trino.sql.tree.AstVisitor.process(AstVisitor.java:27)
at io.trino.sql.analyzer.StatementAnalyzer$Visitor.process(StatementAnalyzer.java:541)
at io.trino.sql.analyzer.StatementAnalyzer.analyze(StatementAnalyzer.java:501)
at io.trino.sql.analyzer.StatementAnalyzer.analyze(StatementAnalyzer.java:490)
at io.trino.sql.analyzer.Analyzer.analyze(Analyzer.java:98)
at io.trino.sql.analyzer.Analyzer.analyze(Analyzer.java:87)
at io.trino.execution.SqlQueryExecution.analyze(SqlQueryExecution.java:289)
at io.trino.execution.SqlQueryExecution.<init>(SqlQueryExecution.java:222)
at io.trino.execution.SqlQueryExecution$SqlQueryExecutionFactory.createQueryExecution(SqlQueryExecution.java:892)
at io.trino.dispatcher.LocalDispatchQueryFactory.lambda$createDispatchQuery$0(LocalDispatchQueryFactory.java:158)
at io.trino.$gen.Trino_474____20250417_103925_2.call(Unknown Source)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:75)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1575)
Caused by: org.apache.iceberg.exceptions.NotFoundException: Failed to open input stream for file: s3a://dev-11111111-us-east-1-data-lake/data_lake/misc/view_of_orders/metadata/00000-31f63ace-1d41-4835-bd54-c16ab17d4233.metadata.json
at io.trino.plugin.iceberg.fileio.ForwardingInputFile.newStream(ForwardingInputFile.java:55)
at org.apache.iceberg.TableMetadataParser.read(TableMetadataParser.java:286)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.lambda$refreshFromMetadataLocation$1(AbstractIcebergTableOperations.java:243)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.lambda$refreshFromMetadataLocation$3(AbstractIcebergTableOperations.java:268)
at dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243)
at dev.failsafe.Functions.lambda$get$0(Functions.java:46)
at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74)
at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187)
at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376)
at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112)
at io.trino.plugin.iceberg.catalog.AbstractIcebergTableOperations.refreshFromMetadataLocation(AbstractIcebergTableOperations.java:268)
... 52 more
Caused by: java.io.FileNotFoundException: s3a://dev-11111111-us-east-1-data-lake/data_lake/misc/view_of_orders/metadata/00000-31f63ace-1d41-4835-bd54-c16ab17d4233.metadata.json
at io.trino.filesystem.memory.MemoryFileSystemCache.handleException(MemoryFileSystemCache.java:192)
at io.trino.filesystem.memory.MemoryFileSystemCache.getOrLoadFromCache(MemoryFileSystemCache.java:172)
at io.trino.filesystem.memory.MemoryFileSystemCache.cacheStream(MemoryFileSystemCache.java:92)
at io.trino.filesystem.tracing.TracingFileSystemCache.lambda$cacheStream$1(TracingFileSystemCache.java:66)
at io.trino.filesystem.tracing.Tracing.withTracing(Tracing.java:51)
at io.trino.filesystem.tracing.TracingFileSystemCache.cacheStream(TracingFileSystemCache.java:66)
at io.trino.filesystem.cache.CacheInputFile.newStream(CacheInputFile.java:63)
at io.trino.plugin.iceberg.fileio.ForwardingInputFile.newStream(ForwardingInputFile.java:52)
... 62 more
Caused by: java.io.FileNotFoundException: s3a://dev-11111111-us-east-1-data-lake/data_lake/misc/view_of_orders/metadata/00000-31f63ace-1d41-4835-bd54-c16ab17d4233.metadata.json
at io.trino.filesystem.hdfs.HdfsInputFile.loadFileStatus(HdfsInputFile.java:149)
at io.trino.filesystem.hdfs.HdfsInputFile.length(HdfsInputFile.java:81)
at io.trino.filesystem.tracing.Tracing.withTracing(Tracing.java:51)
at io.trino.filesystem.tracing.TracingInputFile.length(TracingInputFile.java:81)
at io.trino.filesystem.memory.MemoryFileSystemCache.load(MemoryFileSystemCache.java:179)
at io.trino.filesystem.memory.MemoryFileSystemCache.lambda$getOrLoadFromCache$3(MemoryFileSystemCache.java:169)
at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4903)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3574)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2190)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2080)
at com.google.common.cache.LocalCache.get(LocalCache.java:4017)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4898)
at io.trino.cache.EvictableCache.get(EvictableCache.java:119)
at io.trino.filesystem.memory.MemoryFileSystemCache.getOrLoadFromCache(MemoryFileSystemCache.java:169)
... 68 more
Caused by: java.io.FileNotFoundException: File does not exist: s3a://dev-11111111-us-east-1-data-lake/data_lake/misc/view_of_orders/metadata/00000-31f63ace-1d41-4835-bd54-c16ab17d4233.metadata.json
at io.trino.hdfs.s3.TrinoS3FileSystem.getFileStatus(TrinoS3FileSystem.java:521)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at io.trino.filesystem.hdfs.HdfsInputFile.lambda$loadFileStatus$2(HdfsInputFile.java:140)
at io.trino.hdfs.authentication.NoHdfsAuthentication.doAs(NoHdfsAuthentication.java:27)
at io.trino.hdfs.HdfsEnvironment.doAs(HdfsEnvironment.java:134)
at io.trino.filesystem.hdfs.HdfsInputFile.loadFileStatus(HdfsInputFile.java:140)
... 82 more
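For what it's worth, the trace reads as if the replaced view's storage table still points at the original metadata location (00000-31f63ace-….metadata.json) even though that file no longer exists after the replace. A toy sketch of that suspected failure mode (all names hypothetical, this is not Trino code, just a model of a stale metadata pointer):

```python
# Toy model of the suspected bug: CREATE OR REPLACE writes new storage
# metadata but leaves (or restores) a stale metadata-location pointer,
# while the old metadata file is gone. Hypothetical names, not Trino code.

class MetadataNotFound(Exception):
    pass

files = set()    # simulated object store (metadata files that exist)
catalog = {}     # view name -> metadata location of its storage table

def create_mv(name, version):
    location = f"{name}/metadata/{version:05d}-abc.metadata.json"
    files.add(location)
    catalog[name] = location

def replace_mv_buggy(name):
    old = catalog.get(name)
    create_mv(name, 1)    # new storage metadata is written
    files.discard(old)    # old metadata file is removed...
    catalog[name] = old   # ...but the stale location is what gets recorded

def read_mv(name):
    location = catalog[name]
    if location not in files:
        raise MetadataNotFound(
            f"Metadata not found in metadata location {location}")
    return location

if __name__ == "__main__":
    create_mv("view_of_orders", 0)
    read_mv("view_of_orders")            # works before the replace
    replace_mv_buggy("view_of_orders")
    try:
        read_mv("view_of_orders")
    except MetadataNotFound as e:
        print(e)
```

Under this model, any read after the replace fails exactly the way the query above does, because the catalog entry references a metadata file that was deleted.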
jvm.config:
-server
-Xmx16G
-XX:+UseG1GC
-XX:InitialRAMPercentage=80
-XX:MaxRAMPercentage=80
-XX:G1HeapRegionSize=32M
-XX:+ExplicitGCInvokesConcurrent
-XX:+ExitOnOutOfMemoryError
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-XX:ReservedCodeCacheSize=512M
-XX:PerMethodRecompilationCutoff=10000
-XX:PerBytecodeRecompilationCutoff=10000
-Djdk.attach.allowAttachSelf=true
-Djdk.nio.maxCachedBufferSize=2000000
-Dfile.encoding=UTF-8
# Allow loading dynamic agent used by JOL
-XX:+EnableDynamicAgentLoading
iceberg.properties:
connector.name=iceberg
hive.metastore.uri=thrift://trino-metastore:9083
iceberg.file-format=PARQUET
iceberg.compression-codec=ZSTD
iceberg.format-version=2
iceberg.unique-table-location=false
iceberg.max-partitions-per-writer=1000
fs.hadoop.enabled=true
hive.s3.max-connections=500
hive.s3.ssl.enabled=false
hive.s3.endpoint=minio:9000
hive.s3.path-style-access=true
hive.s3.security-mapping.config-file=http://internal-api:3000/configuration/s3mappings?x-tenant=solution
hive.s3.security-mapping.json-pointer=
hive.s3.security-mapping.refresh-period=60s
hive.config.resources=/opt/trino-server/etc/core-site.xml
It was working fine before the upgrade.
Could you tell me whether there is a setting I'm missing, or test this on your systems to see whether it is a bug?
I couldn't find a test in https://github.com/trinodb/trino/blob/master/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMaterializedView.java that covers the REPLACE case.