How To Add Tokenization For Enhanced Unicode Data When Using Character Maps For International Data In Oracle Enterprise Data Quality (EDQ)
Last updated on DECEMBER 02, 2016
Applies to: Oracle Enterprise Data Quality - Version 8.0.1 to 9.0.6 [Release 8.0 to 9.0]
Information in this document applies to any platform.
How can I extend the tokenization used by Character Mapping (for example, in the Parse processor and the Pattern Mapping processor) to cover the full Unicode range?
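To illustrate the underlying concept (this is a generic Java sketch, not EDQ's actual character-map configuration or API): a tokenizer whose letter class covers only ASCII splits international names at accented or CJK characters, whereas a Unicode-aware letter class such as the regex property `\p{L}` keeps them intact. The class and pattern names below are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch only: shows why an ASCII-only character map
// mis-tokenizes international data, and how a Unicode-aware class behaves.
public class UnicodeTokenSketch {
    // Letter run limited to ASCII, like a default map covering only A-Z
    static final Pattern ASCII_LETTERS = Pattern.compile("[A-Za-z]+");
    // Letter run using the Unicode \p{L} property (all Unicode letters)
    static final Pattern UNICODE_LETTERS = Pattern.compile("\\p{L}+");

    // Collect all runs of the given character class as tokens
    static List<String> tokenize(Pattern p, String input) {
        List<String> tokens = new ArrayList<>();
        Matcher m = p.matcher(input);
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        String name = "José 東京";
        // ASCII-only map truncates "José" at the accented character
        System.out.println(tokenize(ASCII_LETTERS, name));   // [Jos]
        // Unicode-aware map keeps accented and CJK letters together
        System.out.println(tokenize(UNICODE_LETTERS, name)); // [José, 東京]
    }
}
```

The same principle applies when widening an EDQ character map: every code point of interest must be assigned to a token class, or it falls outside the recognized ranges and breaks tokens apart.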