Abstract: Recently, text-to-image generation models have gained tremendous popularity due to their ability to produce accurate, diverse, and even creative images from text prompts. However, text prompts are highly ambiguous as a means of conveying visual control. For example, we may want to generate an image containing "my own backpack", or an image with "my backyard" as the background; such control signals cannot be well represented as text. Therefore, diverse types of control signals are needed to complement the text-to-image generation process.
Title: Using Graph Databases for the Semantic Enrichment and Discovery of Heterogeneously Described Metadata and Data: The Case of Livestock Population Data