From "Wes McKinney (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (ARROW-462) [C++] Implement in-memory conversions between non-nested primitive types and DictionaryArray equivalent
Date Fri, 06 Jan 2017 17:04:58 GMT

     [ https://issues.apache.org/jira/browse/ARROW-462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wes McKinney updated ARROW-462:
-------------------------------
    Description: 
We use a hash table to extract unique values and dictionary indices. There may be an opportunity
to consolidate common code with the dictionary encoding implemented in parquet-cpp
(but the dictionary indices will not be run-length encoded in Arrow):

https://github.com/apache/parquet-cpp/blob/master/src/parquet/encodings/dictionary-encoding.h

This functionality also needs to support encoding that is split across multiple record batches --
the hash table would be a stateful entity, so we can continue to hash additional chunks of data
and dictionary-encode multiple arrays against a shared dictionary at the end.
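A minimal sketch of the stateful encoder described above, using hypothetical names (`DictionaryEncoder`, `EncodeChunk`) that are not part of any existing Arrow API; a real implementation would operate on Arrow builders and arrays rather than `std::vector`:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: a stateful dictionary encoder. The hash table
// persists across calls, so several chunks (record batches) can be
// encoded against one shared, growing dictionary.
template <typename T>
class DictionaryEncoder {
 public:
  // Encode one chunk of values into dictionary indices, inserting
  // previously unseen values into the dictionary as they appear.
  std::vector<int32_t> EncodeChunk(const std::vector<T>& values) {
    std::vector<int32_t> indices;
    indices.reserve(values.size());
    for (const T& v : values) {
      auto it = index_.find(v);
      if (it == index_.end()) {
        int32_t code = static_cast<int32_t>(dictionary_.size());
        it = index_.emplace(v, code).first;
        dictionary_.push_back(v);
      }
      indices.push_back(it->second);
    }
    return indices;
  }

  // Unique values in first-seen order; shared by all encoded chunks.
  const std::vector<T>& dictionary() const { return dictionary_; }

 private:
  std::unordered_map<T, int32_t> index_;  // value -> dictionary index
  std::vector<T> dictionary_;             // unique values, insertion order
};
```

Because the hash table outlives a single call, encoding {3, 1, 3} and then {1, 5} yields index arrays {0, 1, 0} and {1, 2} over the single shared dictionary {3, 1, 5}.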

  was:
We use a hash table to extract unique values and dictionary indices. There may be an opportunity
to consolidate common code with the dictionary encoding implemented in parquet-cpp
(but the dictionary indices will not be run-length encoded in Arrow):

https://github.com/apache/parquet-cpp/blob/master/src/parquet/encodings/dictionary-encoding.h


> [C++] Implement in-memory conversions between non-nested primitive types and DictionaryArray equivalent
> -------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-462
>                 URL: https://issues.apache.org/jira/browse/ARROW-462
>             Project: Apache Arrow
>          Issue Type: New Feature
>          Components: C++
>            Reporter: Wes McKinney
>
> We use a hash table to extract unique values and dictionary indices. There may be an
> opportunity to consolidate common code with the dictionary encoding implemented
> in parquet-cpp (but the dictionary indices will not be run-length encoded in Arrow):
> https://github.com/apache/parquet-cpp/blob/master/src/parquet/encodings/dictionary-encoding.h
> This functionality also needs to support encoding that is split across multiple record batches
> -- the hash table would be a stateful entity, so we can continue to hash additional chunks of
> data and dictionary-encode multiple arrays against a shared dictionary at the end.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
