23.3.0-11
Updated 09/24/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-94468 | Backup/DR | If a user restored a backup to a cluster whose communal location path contained only a bucket name and did not end with a slash, Vertica derived the wrong metadata path. This issue has been resolved. |
| VER-95106 | Optimizer | If ARGMAX_AGG and DISTINCT were both used in a query, an internal error was raised. This issue has been resolved. Now, this case raises an unsupported-operation error message that includes a hint on how to rework the SQL query to avoid the error. |
| VER-95111 | Execution Engine | When a hash join on unique keys spilled, the value columns sometimes had alignment mismatches between how the hash table was written and how it was read by the spill code. If these value columns were string types, this could lead to a crash. This alignment issue has been resolved. |
| VER-95197 | Optimizer | Under certain circumstances, partition statistics could be used in place of full table statistics, leading to suboptimal plans. This issue has been resolved. |
| VER-95251 | Optimizer | FK-PK joins over projections with derived expressions placed the PK input on the inner side even when it was much larger than the FK input, which degraded performance in some scenarios. This issue has been resolved. |
| VER-95552 | Execution Engine | An issue that caused a crash when using the WITHIN GROUP () clause with LISTAGG has been resolved. |
| VER-95664 | Execution Engine | Due to a bug in the numeric division code, evaluating the mod operator on some numeric values with large precision returned a wrong result. This issue has been resolved. |
| VER-95822 | Execution Engine | An error in expression analysis for REGEXP_SUBSTR sometimes led to a crash when that function appeared in a join condition. This error has been resolved. |
| VER-95965 | Machine Learning | In a corner case, an orphan blob could remain in a session when the training of an ML model was cancelled. This orphan blob could cause a crash on a later attempt to train a model with the same name in the same session. This issue is now resolved. |
| VER-96254 | EON | Previously, in certain cases when a cancel occurred while Vertica was uploading to communal storage, the node would crash. This issue has now been resolved. |
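The VER-95664 entry above concerns exact modulo arithmetic on high-precision numerics. As a language-neutral sketch (Python's `decimal` module, not Vertica internals), the invariant the fix restores is that the remainder must satisfy the division identity exactly, which a float-based computation cannot do at this precision:

```python
from decimal import Decimal, getcontext

# Illustration only: the division identity a == (a // b) * b + (a % b)
# must hold exactly for high-precision numerics. decimal computes it
# exactly; 64-bit floats lose digits beyond ~17 significant figures.
getcontext().prec = 50

a = Decimal("123456789012345678901234567890.123456789")
b = Decimal("9876543210987654321.987654321")

r = a % b
assert a == (a // b) * b + r  # exact: no precision lost

# The same computation through floats silently rounds the inputs, so the
# "remainder" it produces is not the true remainder.
float_r = Decimal(float(a) % float(b))
print(r == float_r)
```

Running this prints `False` for the float path, the same class of wrong answer the fix removes from the engine's exact numeric path.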
23.3.0-10
Updated 06/07/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-93195 | Data load / COPY | When the Avro parser read a byte array of at most 8 bytes into a numeric-typed target, it accepted only a single-word numeric as the target type. This has been resolved; the Avro parser now supports reading short byte arrays into multi-word numeric targets. |
| VER-93327 | Execution Engine | User-defined aggregates did not work with a single DISTINCT built-in aggregate in the same query when the input was not sorted on the grouping columns plus the DISTINCT aggregate column. The issue has been resolved. |
| VER-93447 | Backup/DR | LocalStorageLocator did not implement the construct_new() method. When called, it fell back to the StorageLocation.construct_new() method, which raised an error. This issue has been resolved; LocalStorageLocator.construct_new() is now implemented. |
| VER-93798 | Optimizer | In version 23.3.0-9, queries that reused views containing WITH clauses would sometimes fail after several executions of the same query. This issue has been resolved. |
| VER-93926 | Execution Engine | Whether LIKE ANY / ALL read strings as UTF-8 character sequences or as binary byte arrays depended on whether the collation of the current locale was binary, leading to incorrect results when reading multi-character UTF-8 strings in binary-collated locales. This has been resolved. Now, LIKE ANY / ALL always reads UTF-8 character sequences, regardless of the current locale's collation. |
| VER-93935 | Client Drivers - ODBC | The Windows DSN configuration utility no longer sets vertica as the default KerberosServiceName value when editing a DSN. Starting with version 11.1, providing a value causes the ODBC driver to assume the connection uses Kerberos authentication and to tell the server that it prefers that authentication method, provided the user has been granted a Kerberos authentication method. The KerberosServiceName value might be set in DSNs created with earlier Windows ODBC versions; clearing the value resolves the issue. This issue only applies to users who have been granted a Kerberos authentication method with a lower priority than other authentication methods and who use the DSN configuration utility to set up a DSN on Windows. |
| VER-94034 | ComplexTypes, Kafka Integration | Loading JSON/Avro data with the Kafka and Flex parsers into tables with many columns suffered from performance degradation. The performance issue has been resolved. |
| VER-94330 | Kafka Integration | The vkconfig --refresh-interval option now functions properly. For example, setting it to one hour refreshes the lane worker every hour. |
| VER-94333 | Optimizer | NOT LIKE ANY and NOT LIKE ALL are now consistent with PostgreSQL, where the phrases LIKE, ILIKE, NOT LIKE, and NOT ILIKE are generally treated as operators. |
| VER-94572 | Data load / COPY | In rare cases, copying JSON data to a table using FJsonParser or KafkaJsonParser could bring down the server. This issue has been fixed. |
| VER-94599 | FlexTable | Copying multiple JSON files to a Vertica table using fjsonparser() previously could bring down the initiator node. This issue has been fixed. |
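The VER-93926 entry above hinges on the difference between character-level and byte-level matching. A minimal sketch (plain Python, not Vertica code) shows why a SQL LIKE wildcard such as `_` must count UTF-8 characters, not bytes:

```python
import re

# Illustration of the VER-93926 distinction: a LIKE pattern must match
# *characters*, so '_' matches one UTF-8 character even when that
# character occupies several bytes. Matching over raw bytes instead
# treats a multi-byte character as multiple positions.
def like_match(pattern, value):
    # Translate LIKE syntax: % -> any run, _ -> exactly one character.
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, value, re.DOTALL) is not None

s = "é"                                   # one character, two bytes in UTF-8
assert like_match("_", s)                  # character semantics: matches

# Byte semantics (the old binary-collation behavior): the same data seen
# as two byte positions no longer matches a single '_'.
as_bytes = s.encode("utf-8").decode("latin-1")   # two code units
assert not like_match("_", as_bytes)

# LIKE ANY is simply an OR over the pattern list:
assert any(like_match(p, "résumé") for p in ["res%", "r_sum_"])
```

After the fix, the character-level behavior applies regardless of the locale's collation.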
23.3.0-9
Updated 04/11/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-91668 | ResourceManager | If the default resource pool, defined by the DefaultResourcePoolForUsers configuration parameter, was set to a value other than 'general', the user's view incorrectly reported that non-general resource pool as the default pool even when the user did not have it set in their profile. This issue has been resolved. The default pool in such cases is now correctly reported as 'general'. |
| VER-91794 | Execution Engine | In rare situations, a logic error in the execution engine's "ABuffer" operator could lead to buffer overruns, resulting in undefined behavior. This issue has been fixed. |
| VER-92114 | Catalog Engine | Previously, syslog notifiers could cause the node to go down when attached to certain DC tables. This issue has been resolved. |
| VER-92125 | Optimizer | Queries that used the same views repeatedly sometimes returned errors if those views included WITH clauses. The issue has been resolved. |
| VER-92166 | Procedural Languages | Previously, running certain types of queries inside a stored procedure could cause the database to go down. This has been fixed. |
| VER-92288 | Sessions | The ALTER USER statement could not set a user's idle timeout to the default value, which is defined by the DefaultIdleSessionTimeout configuration parameter; if the empty string was specified, the idle timeout was set to unlimited. This issue has been resolved. You can now set the idle timeout to the DefaultIdleSessionTimeout value by specifying 'default' in the ALTER USER statement. |
| VER-92660 | UI - Management Console | After upgrading the Management Console from version 12.0.4 to 23.3.0, logging in to the Management Console failed and an error message was displayed. This issue has been resolved. |
| VER-92677 | Scrutinize | The scrutinize utility produces a tarball of the data it collects. Previously, scrutinize could fail to create this tarball if it encountered a broken symbolic link. This has been fixed, and the size of the tarball is now logged to scrutinize_collection.log. |
| VER-92749 | HTTP | Changing the Vertica server certificate triggers an automatic restart of the built-in HTTPS server. When this happened on a busy system, nodes could sometimes go down. The issue has been fixed. |
| VER-92820 | Data load / COPY | In COPY, missing error checks allowed certain invalid input to crash the database. This has been resolved. |
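The VER-92677 entry above describes archiving that must tolerate broken symbolic links. A hypothetical sketch of that behavior using Python's `tarfile` (the file names are illustrative, not scrutinize's actual layout):

```python
import os
import tarfile
import tempfile

# Hypothetical sketch of the VER-92677 behavior: when collecting files
# into a diagnostic tarball, skip dangling symbolic links instead of
# letting them abort the archive, then log the tarball's size.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "vertica.log"), "w") as f:
    f.write("sample log line\n")
os.symlink("/nonexistent/target", os.path.join(workdir, "dangling"))

tar_path = os.path.join(workdir, "collection.tar.gz")
with tarfile.open(tar_path, "w:gz") as tar:
    for name in sorted(os.listdir(workdir)):
        full = os.path.join(workdir, name)
        if full == tar_path:
            continue                      # don't archive the archive itself
        if os.path.islink(full) and not os.path.exists(full):
            continue                      # broken symlink: skip, don't fail
        tar.add(full, arcname=name)

print("tarball size:", os.path.getsize(tar_path))  # now logged, per the fix
```

The key design choice is checking `islink` before `exists`, because `os.path.exists` follows symlinks and returns False for a dangling one.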
23.3.0-8
Updated 02/27/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-89517 | Procedural Languages | Fixed memory leaks that could occur with certain stored procedures. |
| VER-91235 | Backup/DR | On HDFS, vbr tried to delete storage files from the wrong fan-out directory. This has been resolved by providing vbr with the correct fan-out directory. |
| VER-91478 | Execution Engine | Since version 11.1SP1, in some cases an optimization in the query plan caused queries running under the COMPUTE_OPTIMIZED crunch scaling mode to produce wrong results. This issue has been fixed. |
| VER-91573 | Optimizer | When the query debugging configuration parameter QueryAssertEnabled was set to 1, replay delete query plans could raise INTERNAL errors and fail to run. This issue has been resolved. |
| VER-91715 | Optimizer | Queries with identical-looking predicates of very different selectivity on different tables in different subqueries could produce bad query plans and degraded performance due to incorrect estimates for those tables. The issue has been resolved. |
| VER-91743 | Execution Engine | The NULLIF function inferred its output type from only the first argument. This led to type compatibility errors when the first argument was a small numeric type and the second argument was a much larger numeric type. This has been resolved; numeric NULLIF now accounts for the types of both arguments when inferring its output type. |
| VER-91819 | Execution Engine | Vertica's execution engine pre-fetches data from disk to reduce wait time during query execution. Memory for the pre-fetch buffers was not reserved with the resource manager, and in some situations a pre-fetch buffer could grow large and bloat a query's memory footprint until the query completed. Queries now account for this pre-fetch memory in their requests to the resource manager, and several internal changes mitigate the long-term memory footprint of larger-than-average pre-fetch buffers. |
| VER-92110 | DDL - Projection | When scanning a projection sorted by two columns (ORDER BY a,b) while materializing only the second column in the sort order (b), Vertica mistakenly assumed the scan was sorted by that column for the purposes of collecting column statistics. This could produce incorrect results when predicate analysis was enabled, and has now been resolved. |
23.3.0-7
Updated 01/17/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-90536 | Optimizer | UPDATE statements with subqueries in SET clauses sometimes returned an error. The issue has been resolved. |
| VER-90857 | Optimizer | CREATE TABLE AS SELECT statements with repeated occurrences of now() and similar functions inserted incorrect results into the target table. The issue has been resolved. |
| VER-91150 | Data load / COPY | The upgrade of the C++ AWS SDK in 12.0.2 caused Vertica to make repeated calls to the metadata server for IAM authentication, affecting performance when accessing S3. Vertica now resets the timestamp to prevent excessive polling. |
| VER-91190 | Optimizer | In version 10.1, Vertica updated its execution engine to sample the execution times and selectivity of query predicates and join predicates in order to run them in the most efficient order. This was disruptive for users whose queries depended on a particular evaluation order, in particular on single-table predicates being evaluated before join conditions: queries whose single-table predicates filtered out data that would raise a coercion error in the join condition could start raising errors once the join condition was evaluated first. This experience has been improved by ensuring that join conditions do not raise type coercion errors when they are evaluated before single-table predicates. |
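The VER-91190 entry above can be modeled in a few lines (a simplified sketch, not engine code): when predicates are reordered, rows a single-table predicate would have removed can reach the join condition first, and a coercion failure there must become "no match" rather than a query error.

```python
# Simplified model of the VER-91190 behavior. Rows, column names, and the
# coercion are illustrative.
rows = [
    {"id": "10",  "kind": "num"},
    {"id": "abc", "kind": "text"},   # the predicate would filter this out
]

def single_table_pred(row):
    return row["kind"] == "num"

def join_condition(row):
    try:
        return int(row["id"]) == 10      # coercion can fail on 'abc'
    except ValueError:
        return False                     # fixed behavior: quietly no-match

# Reordered plan: join condition evaluated before the single-table
# predicate. With the fix, the bad row simply fails to match instead of
# aborting the query.
result = [r for r in rows if join_condition(r) and single_table_pred(r)]
assert result == [{"id": "10", "kind": "num"}]
```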
23.3.0-6
Updated 11/21/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-89566 | Tuple Mover | When the node with the lowest OID became secondary (for example, during cluster demotion), there could be an increased number of deadlocks and timeouts due to Data Manipulation Language (DML) statements and internal Tuple Mover tasks. This issue has been resolved. |
| VER-89632 | UI - Management Console | The HTTP Strict-Transport-Security (HSTS) response header was added to all MC responses. This header informs the browser that the site should be accessed only through HTTPS and that any HTTP connections should automatically be converted to HTTPS. |
| VER-89771 | Scrutinize | The --log-limit parameter determines the maximum size of the vertica log preserved when running scrutinize. The limit applies to the vertica.log file on all nodes in the cluster. The default value changed from 1GB to unlimited. |
| VER-89774 | Execution Engine | Casting a negative numeric value to an integer incorrectly returned an "out of range" error when the result of the cast would be 0. This has been resolved. |
| VER-89778 | Security | The following improvements have been made to LDAPLink: LDAP synchronizations have been optimized and are now much faster for nested groups, and query profiling now works with LDAP dryrun functions. |
| VER-89783 | Optimizer | In some circumstances, a UNION query that grouped by an expression coercing a value to a common data type returned an error. This issue has been resolved. |
| VER-89844 | Data Collector | If a notifier was set for some DC tables and subsequently dropped, it still remained present in those DC table policies. This could produce a very large number of messages in vertica.log and potentially crash nodes. The issue was resolved by making DROP NOTIFIER support CASCADE logic; without CASCADE, the drop fails for notifiers still used by DC tables. |
| VER-89908 | Security | Previously, when configuring internode TLS with a certificate chain longer than a root CA certificate plus a client certificate, the configuration was applied successfully but caused the cluster to shut down. This has been fixed. |
| VER-89916 | Backup/DR | Backups to S3 object storage and Google Cloud Storage failed and returned a "Temp path" error. This issue has been resolved. |
| VER-89961 | Data load / COPY | Loading JSON arrays into table columns sometimes failed when the JSON key and the table column differed in case. The issue has been fixed. |
| VER-89987 | Kafka Integration | When a notifier was set for the NotifierErrors or NotifierStats Data Collector (DC) tables, notifications sent with a Kafka notifier could cause a loop that produced an infinite stream of notifications, severely degrading node performance. This issue has been resolved. Now, notifications are disabled for these DC tables, and any existing notifiers have been removed from them. |
| VER-90065 | ComplexTypes, Data load / COPY | A logic gap in the source code could cause an infinite loop while loading complex arrays with thousands of elements, so the DML statement never completed. This issue has been fixed. |
| VER-90090 | Security | In cases of intermittent network connectivity to an LDAP server, Vertica now retries bind operations. |
| VER-90106 | Catalog Engine | Queries now run correctly when the files of delete vectors are in different storage locations. |
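The VER-89774 entry above comes down to truncate-then-range-check ordering. A minimal sketch (Python, illustrative only): casting a negative numeric to an integer truncates toward zero first, so a value like -0.4 becomes 0 and is trivially in range; the bug rejected exactly this case.

```python
from decimal import Decimal

# Sketch of the VER-89774 case: truncate toward zero, THEN range-check.
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def numeric_to_int(x):
    truncated = int(x)                   # int() truncates toward zero
    if not (INT64_MIN <= truncated <= INT64_MAX):
        raise OverflowError("out of range")
    return truncated

assert numeric_to_int(Decimal("-0.4")) == 0     # must succeed, not error
assert numeric_to_int(Decimal("-123.9")) == -123
```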
23.3.0-5
Updated 10/10/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-89178 | Spread | Previously, if Vertica received unexpected UDP traffic on its client port, the node could go down. This issue has been resolved. |
| VER-89272 | Security | The following improvements have been made to LDAPLink: LDAP synchronizations have been optimized and are now much faster for nested groups, and query profiling now works with LDAP dryrun functions. |
| VER-89274 | Data load / COPY | If a Parquet query or load was interrupted (such as by a LIMIT clause, an exception during execution, or user cancellation) while the ParquetColumnReaderSize configuration parameter was set to zero, Vertica could crash. This issue has been resolved. |
| VER-89335 | Data Collector | In some environments the io_stats system view was empty. The monitoring functionality has been improved with better detection of I/O devices. |
| VER-89487 | EON, Execution Engine | A LIKE ANY or LIKE ALL expression with a non-constant pattern argument on its right-hand side sometimes resulted in a crash or an incorrect internal error. This issue has been resolved. Now, this type of pattern argument results in a normal error. |
23.3.0-4
Updated 04/11/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-88126 | EON | The sync_catalog function failed when MinIO communal storage did not meet read-after-write and list-after-write consistency guarantees. A check was added to bypass this restriction. However, where possible, users should make sure that their MinIO storage is configured for read-after-write and list-after-write consistency. |
| VER-88631 | Admin Tools | The admintools stop_db command failed and returned an error stating that active sessions prevented the shutdown. This issue has been resolved; stop_db now stops the database without errors. |
| VER-88924 | UI - Management Console | Provisioning a new database on Amazon Web Services failed. This issue has been resolved. |
| VER-88955 | Optimizer | In some circumstances, queries with outer joins or cross joins that also used Top-k projections caused a server error. The issue has been resolved. |
| VER-89101 | Admin Tools | On SUSE Linux Enterprise Server 15, the systemctl status verticad command failed. This issue has been resolved. |
23.3.0-3
Updated 09/06/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87331 | Optimizer | In some cases, using SQL macros that return string types could result in core dumps. The issue has been resolved. |
| VER-87967 | SDK | Previously, you could not compile Vertica UDx builds with GCC compiler version 13 and higher. This issue has been resolved. |
| VER-87970 | Kafka Integration | In some circumstances, there were long timeouts, or the process could hang indefinitely, when the KafkaAvroParser accessed the Avro Schema Registry. This issue has been resolved. |
| VER-88495 | Backup/DR | Every time Vertica loaded a snapshot, it checked all the storage files. This check was time-consuming and did not need to run so often, so it is now disabled. |
| VER-88496 | Client Drivers - ODBC | Previously, the connection property FastCursorClose was set to false by default, which prevented you from canceling SQLFetch(); you had to set it to true with conn.addToConnString("FastCursorClose=1"); to cancel requests. FastCursorClose is now set to true by default. |
| VER-88546 | Optimizer | Queries that contained a WITH query referenced more than once and that also contained multiple distinct aggregates failed with a system error. This issue has been resolved. |
| VER-88628 | Execution Engine | If user-created system tables exist with IS_SYSTEM_TABLE set to "true" when you upgrade to 23.3.0, queries on some V_CATALOG system tables fail with an assertion error after you complete the upgrade. |
| VER-88655 | Optimizer | Queries that contained a WITH query referenced more than once, together with joins on tables with segmented projections and SELECT DISTINCT or LIMIT subqueries, sometimes produced an incorrect result. This issue has been resolved. |
23.3.0-2
Updated 08/29/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87253 | ComplexTypes, Execution Engine | The optimization that makes EXPLODE on complex types materialize only the fields needed by the query was not applied to the similar UNNEST function. This has been resolved; UNNEST now likewise prunes unused fields from scans and loads. |
| VER-87775 | Admin Tools, Data Collector | If you revived a database while the EnableDataCollector parameter was set to 1, you could not start the database after it was revived. This issue has been resolved. To start such a database, disable the cluster lease check. |
| VER-87799 | Optimizer | During the planning stage, updates on tables with thousands of columns using thousands of SET USING clauses took a long time. Planning performance for these updates has been improved. |
| VER-87820 | Execution Engine | When casting a numeric to an integer, the bounds of acceptable values were based on the NUMERIC(18, 0) type rather than the INTEGER type. As a result, 19-digit numbers that still fit in a 64-bit integer incorrectly raised an error. Casting a numeric to an integer now checks against the proper bounds for the INTEGER type. |
| VER-87876 | Catalog Engine | Previously, when a cluster lost quorum and switched to read-only mode or stopped, some transaction commits in the queue might still be processed. However, because of the loss of quorum, these commits might not have been persisted: these "transient transactions" were reported as successful but were lost when the cluster restarted. Now, when Vertica detects a transient transaction, it issues a WARNING so you can diagnose the problem, and it creates an event in ACTIVE_EVENTS that describes what happened. |
| VER-87962 | Tuple Mover | The Tuple Mover logged a large number of PURGE requests on a projection while another MERGEOUT job was running on the same projection. This issue has been resolved. |
| VER-87965 | Optimizer | Queries with outer joins over subqueries whose WHERE clauses contained AND expressions with constant terms sometimes returned an error. This issue has been resolved. |
| VER-87975 | Optimizer | When creating a UDx side process, Vertica required the current time zone to have a name. This caused a crash when a UDx side process was created under a time zone specified by a GMT offset rather than a name. This issue has been resolved. |
| VER-88005 | Execution Engine | Queries on large tables stopped the database because the indices that Vertica uses to navigate the tables consumed too much RAM. This issue has been resolved, and the indices now use less RAM. |
| VER-88115 | Client Drivers - ODBC | Previously, the ODBC driver could return 64-bit FLOATs whose last bit was incorrect, making the values non-IEEE-compliant. This has been fixed. |
| VER-88205 | Optimizer | In some query plans with segmentation across multiple nodes, Vertica raised an internal optimizer error when trying to prune unused data edges from the plan. This issue has been resolved. |
| VER-88227 | EON | In rare circumstances, the automatic sync of catalog files to communal storage stopped working on some nodes. Users could still sync manually with sync_catalog(). The issue has been resolved. |
| VER-88281 | Performance tests | In some cases, the NVL2 function caused Vertica to crash when it returned an array type. This issue has been resolved. |
| VER-88337 | Procedural Languages | When a stored procedure executed a subquery that included constraints, it returned an incorrect value. This issue has been resolved. |
| VER-88341 | Backup/DR | Backup and restore operations failed on FIPS-enabled systems. This issue has been resolved. |
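The VER-87253 entry above describes field pruning for complex types. A hypothetical sketch of the idea (all names are illustrative; this is not how Vertica represents structs internally): when a query references only some fields of a complex value, the scan materializes just those fields.

```python
# Hypothetical sketch of the field pruning described in VER-87253.
rows = [
    {"id": 1, "addr": {"city": "Boston", "zip": "02110",
                       "geo": {"lat": 42.36, "lon": -71.06}}},
]

def prune_fields(struct, needed):
    # Keep only the referenced top-level fields of the struct.
    return {k: v for k, v in struct.items() if k in needed}

# A query that UNNESTs addr but touches only addr.city materializes this:
materialized = [prune_fields(r["addr"], {"city"}) for r in rows]
assert materialized == [{"city": "Boston"}]   # zip and geo never loaded
```

The fix extends this pruning, already applied to EXPLODE, to UNNEST as well.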
23.3.0-1
| Issue Key | Component | Description |
|---|---|---|
| VER-87918 | UI - Management Console | When you provisioned a database from the Management Console, a connection issue prevented the Management Console from communicating with the new database. This issue has been resolved. |
23.3.0-0
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-82827 | ResourceManager | Previously, user-defined global resource pools could have the same name as subcluster resource pools. This is no longer supported. However, different subclusters can still have resource pools with the same name. When you create a new subcluster resource pool, Vertica checks whether there is any global pool with the same external name, and vice versa. For example: => SELECT name, subcluster_name FROM RESOURCE_POOLS WHERE name = ‘test_pool’; name | subcluster_name --------{-}{{-}}{{-}}+{{-}}{{-}}{-}------------------- test_pool | secondary_subcluster (1 row) => CREATE RESOURCE POOL test_pool maxmemorysize ‘2G’; ROLLBACK 4593: Resource pool “test_pool” already exists 以前,用户定义的全局资源池可以与子集群资源池同名。 现在不再支持这种做法。但是,不同的子集群仍然可以拥有同名的资源池。 当您创建新的子集群资源池时,Vertica 会检查是否有任何全局池具有相同的外部名称,反之亦然。 例如: => SELECT name, subcluster_name FROM RESOURCE_POOLS WHERE name = ‘test_pool’; name | subcluster_name --------{-}{{-}}{{-}}+{{-}}{{-}}{-}------- test_pool | secondary_subcluster (1 row) => CREATE RESOURCE POOL test_pool maxmemorysize ‘2G’; ROLLBACK 4593:资源池“test_pool”已存在 |
| VER-83661 | Data Export, S3 | Export to Parquet sometimes logged errors in a DC table for successful exports. This issue has been resolved. |
| VER-83998 | Admin Tools, Security | Paramiko has been upgraded to 2.10.1 to address CVE-2022-24302. |
| VER-84276 | Execution Engine | Predicate reordering optimization moved a comparison against a constant ahead of SIP filters, but the SIP filter needed to be evaluated after the constant predicate. This issue has been resolved: now, predicates are not reordered when a stateful SIP filter needs to be evaluated in a particular order. |
| VER-84493 | Optimizer | Queries eligible for TOPK projections that were also eligible for elimination of no-op joins would sometimes exit with an internal error. This issue has been resolved. |
| VER-84894 | Client Drivers – OLEDB | Previously, OLEDB connections could time out, causing SSAS tools to crash. This issue has been resolved. |
| VER-85008 | Hadoop | A logic error in pushing predicates down to prune Parquet row groups and ORC stripes would sometimes result in false positives, pruning data which actually should have passed the predicate. This issue is known to occur when using IN or NOT IN expressions and applying an expression to transform the ORC or Parquet column on the left-hand side of the expression. This issue has been resolved. |
| VER-85153 | AP-Geospatial | If you nested multiple geospatial functions when reading from a Parquet file, there was an issue finding usable memory that made the database crash. This issue has been resolved. |
| VER-85187 | Execution Engine | When pushing down predicates of a query that involved a WITH clause being turned into a shared temp relation, an IS NULL predicate on the preserved side of a left outer join was pushed below the join. As a result, rows that should have been filtered out were erroneously included in the result set. This issue has been resolved by updating the predicate pushdown logic. |
| VER-85260 | Execution Engine | LIKE operators that were qualified by ANY and ALL did not correctly evaluate multiple string constant arguments. This issue has been resolved. |
| VER-85311 | FlexTable | In some cases, COMPUTE_FLEXTABLE_KEYS assigned non-string data types to keys where a string data type was more suitable. The algorithm has been improved to prefer string types in those cases. |
| VER-85626 | Execution Engine | Expressions resembling expr = ANY(string_to_array(list_of_string_constants)) had a logic error that resulted in undefined behavior. This issue has been resolved. |
| VER-86015 | Admin Tools | The default logrotate configuration uses "dateext" with the default date format. Using the default date format limits log rotation to no more than once per day. For more frequent rotations, administrators can edit their logrotate configuration files to use the "dateformat" string. For details, see the Linux man page for logrotate. |
| VER-86019 | Kubernetes | If you used VerticaDB operator v1.10.0 and you ran a sidecar container with your VerticaDB custom resource, the operator sometimes failed to restart the Vertica process. This issue has been resolved. |
| VER-86060 | Recovery | When you applied a swap partition event to one table, the other table involved in the same swap partition event was removed from the dirty transactions list. This issue has been resolved. Now, both tables involved in the same swap partition event are in the dirty transaction list. |
| VER-86098 | FlexTable | FCSVParser loaded empty strings, instead of NULL, for values that matched the COPY NULL parameter. This issue occurred only on pure flex tables. The issue has been resolved. |
| VER-86132 | Client Drivers - VSQL | Previously, Vertica would load an incorrect number of files if you attempted to load more than 65,535 files with COPY FROM LOCAL. This has been fixed; COPY FROM LOCAL now properly loads up to 4,294,967,295 files. |
| VER-86198 | Execution Engine | When a database had many storage locations, query and other operations such as analyze_statistics() were sometimes slow. This issue has been resolved. |
| VER-86223 | ComplexTypes | The flex JSON and Avro parsers did not always correctly handle excessively large ARRAY[VARCHAR] inputs. In certain cases this would lead to undefined behavior resulting in a crash. This issue has been resolved. |
| VER-86438 | Machine Learning | Loading certain types of data sometimes led to an empty string being non-empty, which produced a variety of errors. This issue has been resolved. |
| VER-86442 | ComplexTypes | In some circumstances, queries that had valid scalar data types were returning a VIAssert error. This issue has been resolved. |
| VER-86494 | Client Drivers – Python, Sessions | Loading certain data could sometimes cause an empty string to not actually be empty, which could lead to a variety of errors. This issue has been resolved. |
| VER-86500 | Execution Engine | In some circumstances, the database crashed with errors when you upgraded from Vertica version 11.1.1 and higher to Vertica version 12.0.4. This issue has been resolved. |
| VER-86507 | Catalog Engine | Truncating a local temporary table unnecessarily required a global catalog lock, as temporary tables are session scoped. This issue has been resolved. |
| VER-86578 | DDL | In versions 12.0.2 and 12.0.3, the QUERY_REQUESTS system table displayed incorrectly when a query was executing. This issue has been resolved. |
| VER-86692 | Optimizer | MERGE queries with an INTO…USING clause that calls a subquery sometimes returned an error when merging into a table with SET USING/DEFAULT query columns. This issue has been resolved. |
| VER-86701 | Catalog Engine, Security | When upgrading a database, any user or role with the same name as a predefined role is renamed. |
| VER-86708 | Optimizer | In some circumstances, queries that had valid scalar data types were returning a VIAssert error. This issue has been resolved. |
| VER-86709 | Execution Engine | In some contexts, equivalent numeric types were considered incompatible with each other, which resulted in errors. This issue has been resolved. |
| VER-86724 | Catalog Engine, Performance tests | In version 12.0.0, querying system tables could be slower than in previous versions. Version 12.0.4-8 adjusts the system table segmentation to improve system table queries. |
| VER-86734 | Kafka Integration, Security | Previously, using a Kafka Notifier with SASL_SSL or SASL_PLAINTEXT would incorrectly use SSL instead. This issue has been resolved. |
| VER-86804 | Security | Previously, adapter_parameters values in the NOTIFIER system table would be truncated if they exceeded 128 characters. This limit has been increased to 8196 characters. |
| VER-86833 | Execution Engine | When evaluating check constraints on tables with multiple projections with different sort orders, Vertica would sometimes read the data from the table incorrectly. This issue has been resolved. |
| VER-86864 | EON | Under certain circumstances, depending on the frequency and length of depot fetching activity, a file could not be re-fetched after its eviction (whether automatic or cleared manually) unless the node was restarted. This issue has been resolved. |
| VER-86901 | Backup/DR | When each node in a cluster pointed to a different backup location, the backup location was non-deterministic, and there were inconsistent failures. This issue has been resolved. |
| VER-86984 | Optimizer | If a user-defined SQL function that returns a string was nested within a call to TRIM which was nested within a call to NULLIF (for example: "NULLIF(TRIM(user_function(value),' '))"), Vertica could return an invalid result or the error "ERROR: ICU locale operation error: 'U_BUFFER_OVERFLOW_ERROR'". This issue has been resolved. |
| VER-86993 | ComplexTypes | Previously, the flex table and Kafka parsers could crash if they tried to load array data that is too large for the target table. This behavior was fixed but introduced a change where those array values would cause the whole row to be rejected instead of setting the array value to NULL. Now, the default behavior is to set the data cell to NULL if the array value is too large. This can be overridden with the "reject_on_materialized_type_error" parameter, which will have the rows be rejected instead. |
| VER-87003 | Execution Engine | Because the explode function is a 1:N transform function, using ORDER BY in its OVER clause has an undefined effect. Previously, using an ORDER BY clause in the OVER clause of explode could result in an INTERNAL error if the configuration parameter TryPruneUnusedDataEdges was set to 1. This issue has been resolved. |
| VER-87431 | ComplexTypes | The flex and Kafka parsers erroneously did not respect the "reject_on_materialized_type_error" parameter in cases where an array was too large for the target column and no element was rejected. Previously, such values were always rejected. This has been corrected: now, if "reject_on_materialized_type_error" is false, those values are set to NULL instead. |
| VER-87537 | Documentation, Installation Program | Some characters did not render correctly when specific commands were copied and pasted from the documentation. This issue has been resolved. |
| VER-87803 | ComplexTypes, Execution Engine | When rewriting a CROSS JOIN UNNEST query into an equivalent query that puts the UNNEST in a subquery, requesting scalar columns from a table with larger complex columns could lead to an INTERNAL error. This has been resolved. |
| VER-87856 | DevOps | Fixed RPM digests by installing a newer version of the RPM tool on our build container when building RPMs. |
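To illustrate the VER-86015 note above, the following is a minimal sketch of a logrotate configuration that permits more than one rotation per day. The file path, log glob, and retention count are hypothetical; actual rotation frequency also depends on how often your cron or systemd timer invokes logrotate.

```
# Hypothetical /etc/logrotate.d/vertica fragment -- illustrative only.
/opt/vertica/log/*.log {
    hourly                  # effective only if logrotate itself runs hourly
    rotate 24               # keep 24 rotated files (assumption, tune to taste)
    compress
    dateext
    # Include the hour in the suffix; the default dateformat (-%Y%m%d)
    # produces the same filename all day, which blocks a second rotation.
    dateformat -%Y%m%d%H
}
```

The key point is the "dateformat" directive: with the default daily date suffix, a second rotation in the same day would collide with the existing file name, so logrotate skips it.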
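The behavior described in VER-86993 and VER-87431 can be sketched with a hedged example (table name, file path, and use of the fjsonparser are illustrative assumptions, not taken from the release note):

```sql
-- Default behavior after the fix: an array value too large for the
-- target column is set to NULL and the rest of the row is kept.
COPY sensor_readings FROM '/data/readings.json'
PARSER fjsonparser();

-- Opt back into rejecting the whole row instead:
COPY sensor_readings FROM '/data/readings.json'
PARSER fjsonparser(reject_on_materialized_type_error=true);
```

Note that per VER-87431, the parameter is honored even when the array is oversized as a whole and no individual element was rejected.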

Last modified: 2024-09-27 14:26:11
[Copyright Notice] This article is original content by a 墨天轮 (modb.pro) user. Reproductions must credit the source (墨天轮), the article link, and the author; otherwise the author and 墨天轮 reserve the right to pursue liability. To report suspected plagiarism or infringement on 墨天轮, email contact@modb.pro with supporting evidence; confirmed content will be removed immediately.




