This repository has been archived by the owner on Jan 22, 2019. It is now read-only.

Column order of CSV schema is ignored when @JsonFormat(shape = JsonFormat.Shape.ARRAY) is present #68

Open
georgewfraser opened this issue Feb 7, 2015 · 7 comments


@georgewfraser
Contributor

Here is a minimal example demonstrating the bug:

https://gist.github.com/georgewfraser/a2f722bdde3d4194b9ee

@georgewfraser
Contributor Author

There is a workaround: use @JsonPropertyOrder({...}) to declare the same column order you specify when building the CSV schema.
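A minimal sketch of the workaround (the `Member` class and its fields are illustrative, not taken from the gist): the @JsonPropertyOrder annotation repeats the column order used when building the CsvSchema, so databind emits the array elements in the order the schema expects.

```java
import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;
import com.fasterxml.jackson.dataformat.csv.CsvSchema;

public class Workaround {
    @JsonFormat(shape = JsonFormat.Shape.ARRAY)
    @JsonPropertyOrder({ "name", "email" }) // must match the schema's column order
    public static class Member {
        public String name;
        public String email;
    }

    public static void main(String[] args) throws Exception {
        CsvMapper mapper = new CsvMapper();
        // Schema columns declared in the same order as @JsonPropertyOrder
        CsvSchema schema = CsvSchema.builder()
                .addColumn("name")
                .addColumn("email")
                .build();
        Member m = new Member();
        m.name = "Alice";
        m.email = "alice@example.com";
        System.out.print(mapper.writer(schema).writeValueAsString(m));
    }
}
```

With the two orderings aligned, the emitted row places each value under the intended column.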

@cowtowncoder
Member

Thank you for reporting this.

@cowtowncoder
Member

Ok, I think I know why this occurs. Since Shape.ARRAY basically converts the output to an array structure, the underlying CSV writer has no knowledge that the output bears any relation to the object described by the schema; it is just a sequence of values. The workaround works because it changes the ordering that databind uses to send the elements.

This is not easy to change, so your workaround is useful.

But one thing I am wondering is whether you could instead avoid using Shape.ARRAY output here altogether. I am guessing you may be using it to produce more compact JSON output; otherwise there is no benefit for the specific case of CSV handling, since CSV output does not include property names anyway (except in the optional header row).

@georgewfraser
Contributor Author

I am using Shape.ARRAY for deserialization; that is the format I receive the data in. It would be fine if CsvMapper simply ignored Shape.ARRAY, but the annotation has to be there so that I can deserialize correctly. Basically my workflow is:

Mailchimp's stupid API format => Nice java object => Nice csv file
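The pipeline above can be sketched roughly like this (the JSON payload shape and the `Member` fields are made-up stand-ins for the real API data): Shape.ARRAY is required so Jackson can read each record from a JSON array, and the same POJO is then written out as CSV.

```java
import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;
import com.fasterxml.jackson.dataformat.csv.CsvSchema;

public class Pipeline {
    // The API delivers each record as a JSON array, e.g. ["Alice","alice@example.com"],
    // so Shape.ARRAY is needed for deserialization to work at all.
    @JsonFormat(shape = JsonFormat.Shape.ARRAY)
    @JsonPropertyOrder({ "name", "email" })
    public static class Member {
        public String name;
        public String email;
    }

    public static void main(String[] args) throws Exception {
        // Step 1: API's array format => nice Java object
        Member m = new ObjectMapper()
                .readValue("[\"Alice\",\"alice@example.com\"]", Member.class);

        // Step 2: Java object => CSV row; schemaFor() derives column
        // order from the POJO's property order (@JsonPropertyOrder)
        CsvMapper csv = new CsvMapper();
        CsvSchema schema = csv.schemaFor(Member.class);
        System.out.print(csv.writer(schema).writeValueAsString(m));
    }
}
```

Deriving the schema with `schemaFor()` sidesteps the ordering mismatch, since the schema's column order and databind's property order then come from the same source.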

@cowtowncoder
Member

Ah ok. Yes, I figured there probably is a reason for that being there. :-)

I am not against making this work better (it should work better). The practical problem is the separation between the higher-level data-binding code, which handles the translation between Object and Array, and the underlying CSV backend; that is, how to handle the interaction between the two.

Perhaps one option would be to add one more introspection option to the streaming generator/parser, indicating whether the format backend supports "as-array" POJO handling. If it does not, databind would simply skip that handling and use default POJO processing.

Something similar is already used for other aspects: for example, native object- and type-id handling is only supported by YAML at this point, and databind interacts by asking the parser/generator whether it has that capability.
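The proposed mechanism might look something like the sketch below. The `canWritePojoAsArray()` method is hypothetical (it does not exist in Jackson); it is modeled on the real `JsonGenerator.canWriteObjectId()` / `canWriteTypeId()` capability checks mentioned above, here mocked with a local interface so the idea stands alone:

```java
public class CapabilitySketch {
    // Hypothetical capability flag, in the style of Jackson's existing
    // JsonGenerator.canWriteObjectId()/canWriteTypeId() introspection.
    interface FormatGenerator {
        boolean canWritePojoAsArray();
    }

    // Databind would consult the capability before honoring Shape.ARRAY.
    static String chooseSerializer(FormatGenerator gen) {
        if (gen.canWritePojoAsArray()) {
            return "as-array serializer";     // e.g. JSON: honor Shape.ARRAY
        }
        return "default POJO serializer";     // e.g. CSV: keep schema-driven order
    }

    public static void main(String[] args) {
        // A CSV-like backend would report "false" and fall back to
        // default POJO processing, preserving the schema's column order.
        System.out.println(chooseSerializer(() -> false));
    }
}
```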

@georgewfraser
Contributor Author

I don't know much about Jackson's internals, so I can't offer much insight. But at least the tests in the linked gist should provide a clear definition of the problem, and hopefully others who need to do this right now will be able to use the workaround. Good luck!

@cowtowncoder
Member

@georgewfraser Right, I was more "thinking out loud", both to validate my idea and to document it for the future. There are enough things to work on that it is quite possible to forget valid implementation ideas, or possible constraints, unless I write them down.
Reproduction is easy indeed; thank you for providing the gist.
