
Category: Assignment 1 – Midterm Review – Module 1&2 Blog Posts and Comments

Generative AI: Using Craiyon and ChatGPT

By Mya Keay

In this module we were instructed to explore the world of generative AI. The use of AI has increased dramatically over the past two years since the introduction of ChatGPT on November 30, 2022, and new technologies and AI platforms have been popping up everywhere since then. As we are meant to learn about generative AI systems that are new to us, I decided to use the website Craiyon.com for the assignment.

What is generative AI?

Before delving into my experience using Craiyon, I wanted to first establish what generative AI is. Generative artificial intelligence is a type of artificial intelligence that helps create new ideas, art, music, or presentations, or simply assists the user with tasks such as grammar checking or video editing. The main difference compared with traditional AI is that generative AI creates new content rather than only analyzing what already exists. You can ask it to produce a presentation or an image from very few prompts and it will generate something that hasn't been made yet, whether that is a song, a short story, an essay, a presentation, or even a piece of art.

My use of generative AI:

The generative AI I chose to use for this assignment was Craiyon.com. Craiyon allows users to type in a prompt and, within about 60 seconds, be given an array of AI-generated artwork matching the prompt provided.

Fig. 1. “Draw a pig playing music” prompt, Craiyon, version 4, OpenAI, 10 Oct. 2024, craiyon.com.

The image above was my first attempt at using this website. I started off with a basic prompt, "draw a pig playing music". What I realised from the result is that because my prompt was simple, I got a simple image in return. I had not specified how I wanted the pig to look or what instrument it should play, so the pig is holding a string, the music notes are more like squiggles, and the face is slightly deformed. I decided to ask again, this time giving more detail and asking for "A joyous pig with a big smile playing the drums."

Fig. 2. “A joyous pig with a big smile playing the drums” prompt, Craiyon, version 4, OpenAI, 10 Oct. 2024, craiyon.com.

As shown, the more detail I gave, the better the quality I received. What I learned from this is that when using AI you need to be specific about what you are asking for. The system only has your prompt to work from, so in order to get the results you want you need to spell out exactly what it is you want.
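For anyone curious, the same lesson applies if you generate images through code instead of a website. The short Python sketch below is only an illustration, not how this assignment was actually done (I used Craiyon through its website, which is not shown here); it assumes the OpenAI Python library and an API key, and simply sends the more detailed prompt to a text-to-image model.

```python
# Illustrative sketch only: Craiyon was used through its website for this post.
# This uses the OpenAI Python SDK as a stand-in text-to-image generator.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A vague prompt and a detailed prompt, mirroring the two Craiyon attempts above.
vague_prompt = "draw a pig playing music"
detailed_prompt = "A joyous cartoon pig with a big smile playing the drums"

# Request one image for the detailed prompt; more specific wording
# generally gives the model more to work with.
result = client.images.generate(
    model="dall-e-3",
    prompt=detailed_prompt,
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # link to the generated image
```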

What would I use AI tools for in the future, and what would I not use them for?

In the future I would definitely use AI tools such as ChatGPT, Craiyon, and others to help form a general idea of a project. If I were tasked with drawing a dog wearing sunglasses, for example, I would ask generative AI to draw me an example, not to use directly, but to have as a guide. The same goes for tools such as ChatGPT: I would use it to help me brainstorm, but not to write a portion of the assignment. That is because generative AI, and AI as a whole, is not always factually correct. Programs such as ChatGPT only have the information they were trained on, and when presented with a question they do not know the answer to, they will sometimes (by some estimates 10–20% of the time) make up an answer based on what they predict it could be from the information they do have, which makes them not a truly reliable source. Therefore I would use AI to aid in brainstorming and the creation of ideas, but I would not rely on it or use it to create an assignment or presentation that I would then show or submit.

That being said, I did use ChatGPT to create a SAMR analysis of the Craiyon website.

SAMR analysis of Craiyon.com:

Before using AI to create the analysis, I wrote my own breakdown to see whether what I received was accurate to what I believed about the site and how it fits into the SAMR model. I think the LLM (Large Language Model) was able to accurately capture what I had thought about the site.

The SAMR model helps users understand how the technology being used can enhance learning and creativity.

  1. Substitution: 

Craiyon uses a text-to-image generator. This allows users to provide prompts to create an image and then be given art to match the prompts provided. This substitutes for the physical side of producing art, such as drawing by hand.

  2. Augmentation:

Instead of users taking the time to sketch a prototype and then altering and refining it over time, the site provides instant results, which users can then modify by refining or adding to their specifications.

  3. Modification:

Because users provide the prompts, they can alter or change them as often as they’d like to explore other pathways or options. This allows for an endless number of modifications in order to find what works best.

  4. Redefinition:

It opens the door for users to express their creativity in ways that would be difficult or impossible by hand. Because users can generate extremely imaginative and detailed images, they are able to explore ideas that might not be achievable otherwise. It also lets people engage with abstract ideas, which can make learning with this tool more interactive and engaging.

Craiyon demonstrates how generative AI can not only replace traditional creative methods but also enhance people’s creativity by allowing them to use new, abstract forms of expression.

“How does the Craiyon website use substitution, augmentation, modification, and redefinition” prompt. ChatGPT-4.0, 14 Mar. version, OpenAI, 10 Oct. 2024, chat.openai.com/chat.

What are the strengths and weaknesses of the site?

Strengths 

  • User-friendly: Craiyon has a simple design that makes it easy for all users to navigate.
  • Free to use: it allows users to generate endless images for free.
  • Ability to share: the site allows users to share their creations with others, encouraging engagement.

Weaknesses

  • Image quality: the images are relatively unsophisticated and not always accurate to the prompt.
  • Site limitations: because the site is free, it has a much longer loading time to receive images unless you pay for an upgrade.
  • Limited editing: although you can alter your prompts, you aren’t able to edit the generated images themselves.
  • Theft: because it uses images from around the web and combines them, there is a risk of copyright issues.

Multimedia Learning: Mayer’s Cognitive Theory of Multimedia Learning and screencast

Written by: Mya Keay

Mayer’s Cognitive Theory of Multimedia Learning discusses how we learn and how we receive, process, and retain information.

In this video I demonstrate using the website Vimeo to edit and create videos. This is done by recording a Zoom meeting.

Dual coding theory is the idea that stuck with me the most.

Dual coding theory, which was introduced by Allan Paivio, suggests that the brain has two separate systems through which we process information: an auditory channel and a visual channel. The auditory channel lets us process information such as music and speech, while the visual channel allows us to understand visual information and cues such as images and demonstrations. The theory suggests that because we have these two channels, we can process information from both categories at the same time. This allows for greater understanding, because we can then “link” what we heard and what we saw to one another, making recollection easier.

I find that using auditory and visual cues simultaneously lets the viewer build a deeper understanding of the topic, because they are able not only to see how something is done but also to hear it explained step by step. The two linking together creates a stronger understanding of the topic being discussed.

Creating the Screencast:

I found that while I was creating my screencast I was doing my best to apply dual coding theory. When I discussed a function of the website, I also explained how to access it and what it does, so that both the auditory and visual channels were being used. I also had the redundancy principle in the back of my mind, deciding that in this instance it was better to use graphics and narration rather than on-screen text, since that would be simpler to demonstrate and therefore easier to process. I kept in mind not to add unnecessary images or background sounds, and to highlight only the important details, limiting the risk of overloading the viewer (the coherence and signalling principles).

Reflection Question:

Who did you imagine as the audience for this screencast? How did that impact your design choices?

  • I imagined the audience for this screencast as people who have not used a video editing program before, such as my grandmother. I often find myself teaching my grandma how to edit her videos and images, but the information is never fully retained. Therefore I made this video with the intention of breaking everything down to its most basic form, so that even those with little to no understanding would come away with a better idea of how to edit their videos, with little to no help needed beyond the video itself.
  • It impacted the way I went about the video because, if I were making it for people in their late teens or early adulthood, I would assume most if not all have a greater understanding of video editing because of their use of social media. In that case I would have discussed the software in greater detail, such as how to overlay outside audio and visuals, or the additional functions that can be used to adjust the quality of images or video. However, because I made it with the mindset that people with no knowledge of the topic would watch it, I made sure to discuss only the basics and not to overcomplicate things.

© 2025 Mya Lynne EDCI337
