
Monday, December 30, 2024

2024: A Look Back To Look Forward

As we approach the end of December, it is time to look back on the year so we may look ahead to 2025. Reflection on the past can be invaluable in creating a path towards the future. Some of the stories I write involve taking a deep look at the world. In fact, several of those stories are even more prescient now than when I wrote them. One of them stands out because of recent world events. It is a post-apocalyptic thriller meant to reflect the present and the past into an imagined future -- a cause-and-effect story that may need to be shopped around in the new year.

It's one of a dozen such stories where I used the present and the past to imagine what the world might be like in the future. Along with writing anthologies, this has become an undeniable trademark of my writing over the past twenty-five years. Writing anthologies is something I have talked about at length here over the past decade. That came about out of necessity and respect for the format.

Writing about the future is a way of dealing with what I have learned and experienced. I can't properly do that by just talking or posting about it on social media. You can get tangled up in people's opinions and beliefs and lose track of why you brought it up in the first place. Discussions are good and part of the process, but once you have a story that you can't stop thinking about, it has to be told by you.

Storytelling is my favorite way to express myself. I like to imagine a story, decide whether it is worth spending time on, and then crack on with it. That said, I have sat on stories for a decade, so cracking on sometimes happens right away and sometimes only when the time is right. The rules are always evolving with creativity. Stories often arise from observations of life or from deeper research. I have sat at a cafe for a few hours and plotted out multiple stories, or discovered a creative path that would take years to follow to completion. I like to believe that my personal barometer for deciding which projects to pursue has improved through the years. Whether that is true or not may be open for debate.

The plans I make for myself are often an attempt to carve out time for creativity. When you realize that you are one of 8 billion people on a massive rock whirling around an invisible race track in the dark expanse of a seemingly infinite universe, you can get lost in the numbers. Storytelling is my vessel to explore it all, whether that be through writing, acting, directing, or designing. When you walk the path of creativity you tend to become the path after a while.

Big ideas become plans. Ideas and plans change based on the situation. Big plans become projects that get attention. Projects become accomplishments. And by the end of the year, you reflect on it all and see that progress was made. Then you make adjustments and plan for the year ahead. You have 365 days to move the ball -- a metaphor for your creative work. We are not Sisyphus; we are not being punished for our imagination. The process of creation is meant to be one of toil, yes, but also one of joyful self-expression. As painful as rejection can be, creating something that would not exist without your imagination feels glorious. This year was no different. I started the year by releasing an illustrated version of the book Michaelmas. This marked my first attempt at integrating AI images into my work. After I released the book I thought I might focus on illustrating another book or work on a graphic novel.

Then in February, we got the tease from OpenAI for Sora. If I am being honest, I wish they had not teased it so early, because my whole view of what I should be doing with AI shifted. As a result of my excitement, I began thinking more about AI video possibilities instead of what I could do with AI images.

By this time, I had already begun work on a pitch package for a TV series, something I had been tinkering with off and on for a year. Pitching the story had been in the works for a few months, though I got a bit sidetracked while illustrating the book. However, once I saw those first Sora videos my mind started racing. Up until then, the AI videos I had seen were easily dismissed as not up to par. But some of those Sora clips made me dream more than I should have. I was blown away like so many others. Ideas about how such a tool might be used with my TV series flooded my imagination. So, I changed the pitch package to reflect how I thought AI video might be used in the post-production of the series. This is an example of me using the same imagination I use to tell fictional stories about the future, only applied to the real world. The rise of AI has had me trying to predict its story arc. In my self-deluded mind, that would have seen the production of the TV series wrap in the summer or winter of 2025 with an eye towards a 2026 release.

At the time, I was focused on gathering interest to get the show made on film with actors. There were things we could do with AI tools once filming was done that would make the series both interactive and immersive in ways I had not imagined were possible before seeing those early Sora videos. The video above is a recent Sora video of an open-world game. For the TV series, I imagined two such games, or two levels of one game, that are edgy and reflective of the material, along with a more intimate and immersive experience set in one charming Disney-esque location. Could they all be part of one larger game? Yes. However, I want people to have the option to experience them separately.

I created a robust Pitch Package, which included an in-depth Show Bible and a more concise Pitch Deck. For those who don't know these documents, the Bible for this story is fifty-one pages long and goes into detail about everything from the pilot episode and the first season to the arc of the entire series. It is filled with visuals and references that paint a clear picture and feel for the series. I prefer these to a treatment, in which you are essentially telling the entire story but without dialogue. Treatments can be very effective. However, I love a creative and robust Show Bible. My current favorite is the one for Stranger Things, originally titled Montauk.

If you thought writing screenplays was just about the script, well, you would be mistaken. Some people (mostly writer/directors) may be able to get away with that, but when you try to sell a project you need to provide a lot more than the script. In reality, the Show Bible and Pitch Deck for a TV series are just as important as the script. That is because you are selling a feeling, and in the case of this series a unique vibe. It is not a traditional show with one clear beginning, middle, and end. In this case, there are multiple overlapping stories in the first season that take place in the same town over a 20-year period.

We are emotional beings, and the key to our hearts is through our feelings. But the way to the mind can be more complex. You have to strike a chord within people that goes beyond knee-jerk emotional responses; it is more about frequency in that regard, where you know what some people like and then create something in the same vein as what has worked before. Feelings are easy. That's why some say that drama is easier to do than comedy. I mean, you don't see many daytime comedies.

I have queried hundreds of people through the years and sent out pitch packages for dozens of movies and TV shows. The process is time-consuming, and nothing is quite as humbling as trying to sell a spec screenplay. AI will change, and is already changing, this process in a BIG way.

Soon, those of us who have walked this path, and those who are drawn to it now and in the future, will not hit the same walls as those who came before -- walls I know too well. Because we are being liberated to become the first generation of multimodal storytellers.

Within a year or so, all of us screenwriters will have the option either to sell our stories the old-fashioned way or to learn the skills to make the films and TV shows ourselves. However, there is also an emerging new type of collaboration, with a team of AI artists working on a single project. It is similar to indie filmmaking except with much faster turnaround. A traditional team may make 1-5 films in a year. Roger Corman managed to produce and/or direct an astounding 9 films a year all the way back in 1957. An AI team will be able to create a dozen or more a year, easily matching Corman's pace and likely well surpassing his impressive output. You gotta think this will be a popular option. I know it is for me, because I have dozens of stories ready to be told with AI tools, and new ones that are begging for attention. While I will certainly create many AI productions on my own, I love to collaborate with others on a shared project. You are your own limitation in this new paradigm. If you want to create it, you will be able to do so.

Speaking of limitations, Google just gave a few AI artists access to Veo 2.0. They teased Veo 1.0 back in May. It looked good then in the few videos they released. However, this new version is phenomenal. It is a new SOTA model and is leaps and bounds better than any other model out there, and that is with all of the other models having improved dramatically over the past year. That is saying a whole hell of a lot.

The dream for artists is to have one tool to help you with all of your creations. As of right now, even though Veo 2.0 is amazing, there is a need to use multiple tools. By the end of 2025, you've got to think that ideal tool will exist in public or in some AI video lab. 

After Sora was released I was disappointed. I mean it was great to finally have access to it after they made us wait nearly a year. Aside from the speed with which it creates the videos, the quality is the same as what we saw back in February. That means they either hit a wall or they are holding back their improvements and just focused on the UI for the rollout. 

With Sam Altman's belief in iterative deployment and OpenAI's willingness to hold back Sora until the election played out, I think there is likely a much better model that they are sitting on. If not, they may have just lost the AI video war to Veo after a week. Maybe Veo 1.0 was also better than Sora; it's hard to tell from the limited examples Google released back in May. Either way, Veo 2.0 is a far more usable tool, and it has me dreaming again. I can't wait to get my hands on it.

The public release of Veo 2.0, whenever that may be, may mark the moment I begin to turn my focus to the production of a short film, and the first step towards a collaboration with others. I have been tinkering around with all the tools (except Veo 2.0, which is not publicly available) without being too focused on making anything. 

Even Veo 2.0 isn't perfect; you wouldn't be able to make a believable lifelike AI movie worth watching with it just yet. However, with Sora, Veo, and all of the other quickly-improving models the time to hone the craft of AI filmmaking is here. That way once the impossible becomes possible, we can be up to speed and ready to crank out some exciting new content. 

I figure once all the tools are good enough, which feels like we are there or nearly there for AI animation and getting closer with lifelike AI, I may be able to create several AI films on my own within a year. I'd like to start doing that in 2025 with AI animation. That way, by the end of 2025, I would be ready to collaborate with others. Why wait? Because I want to be able to do all of this myself before I even attempt to bring others on board. Who knows, maybe I will even work on other people's projects. Being able to do it all myself with the help of AI is a current dream of mine, based on what I have seen from AI, what I have already done creatively over the past 20 years, and my eagerness to put on screen much of what I have written. My second objective is to work with others to help expedite the production of the unpublished and unproduced stories in my library. This way I can also learn new techniques and improve upon what I can do on my own. The productions themselves will likely be better as a result of collaboration. Always improving is key, as is tearing down barriers instead of building them.

I've mentioned this before, but my writing partner on the pilot episode for the TV series I pitched earlier this year said to me, after reading the pitch package I had created for the series, "If you pull it off you'll have a media empire at your fingertips." I could sense his doubts, yet I assured myself that what I had laid out was ambitious, yes, but plausible. Was I reaching into the ether for the impossible? I had kept up with all of the AI updates and knew that what I had discussed in the pitch package might be possible within a year or so. I based the AI aspect of the pitch on what seemed, to me and others, likely to be true by the summer of 2025.

I was seeing AI podcasts, AI games, and immersive experiences where viewers could choose their own adventure or virtually walk through the town where I had set the story. I jammed a whole hell of a lot into the pitch package. The story is ideal for all of it. But unless I could help one of the great directors and producers I respect see my vision as their own, none of it would happen as I had originally envisioned. And that is what it is like to be a speculative screenwriter. Wish in one hand, and... well, you get the idea.

By mid-June, I realized that I had just spent a quarter of the year trying to get someone else to make my story and had not gotten much traction. I started to reflect on that time and it became clear that Hollywood was not open for business. The strikes from last year, the impending crew strike, and the uncertainty of how AI would affect things had ground the business to a halt. 

That was the moment I thought back to what my writing partner had said, and I thought, why not use AI to do exactly that and create a company? A media empire is beyond what I could handle on my own. So my own AGI test will be starting a media empire with the help of a variety of AI tools. The focus is not solely on a TV series or film project, which is all part of the grand plan, but also on what else I can do with AI to build a business beyond just making films and TV series.

We have entered the realm where these AI companies will start passing benchmarks every other week. We are still at the front end of the acceleration. I consider the pre-AGI period the front end because what comes after will be incredibly different. Will it be like Sam Altman's iterative deployment, where things improve at a gradual pace, or will we share a GPT-4-type moment where we all agree that AGI has been achieved?

Google and OpenAI dropped a ton of updates this month. Google dropped information about Willow (its quantum chip), Gemini 2.0 Flash Experimental, Gemini 2.0 Flash Thinking Experimental, and a number of other impressive updates, including Veo 2.0. OpenAI, on the other hand, released o1, o1 Pro, Projects, o3 (benchmarks), vision in Advanced Voice, and several other updates, including Sora. These two companies changed the entire AI landscape with their updates. And open-source models are improving rapidly as they reverse engineer what closed-source companies are doing behind closed doors.

My ultimate goal may be to tell stories but I also have ideas on ways to help others thanks to AI. In fact, one of the first things I thought of back in the Spring of 2023 after GPT-4 dropped was a way in which I could use AI to help others. And so, by mid-August, after spending a month focused on AI video, I began doing research on what kind of company I would want to create with the help of AI.

Other than a desire to be able to create films, TV series, graphic novels, and illustrated books, I had the App idea that I had been kicking around for a year and a half. As I started doing early research on creating an App, I realized that I didn't have just one App idea but several.

A "media empire" sounds intimidating. However, now that I have begun work on one I can say that even with the help of AI it is a hell of a lot of work. All of these AI updates are brilliant and are making what I am doing possible, but I am interested in also finding out if Agents can help me do even more than I had planned. My timelines may shorten and my plans for 2025 may change because of new tools being made available.

During August I realized that while AI video had made some fantastic advancements, it still looked like AI video no matter which tools I used or what I did. The same was true for what I was seeing others do as well. I had been working on creating a lifelike trailer for the TV series but was disappointed that I could not make it look real. I have long thought animation with AI would deliver the best early results, and that has proven to be true. Since August, Hailuo MiniMax has released an update that is great for 2-D animation. I think it is safe to say that we will get an outstanding AI-animated movie by next summer.

While Veo 2.0 may change the landscape for AI video, it is not yet public. Whatever effect it will have will happen in 2025, and that will also push these dozen or so other companies to step up their game. The talk is about physics and how Veo 2.0 nails it and the others fall short. The fact that Google has made the first gigantic leap with the physics in Veo 2.0 is very exciting for someone like myself. That means that by the end of 2025 I will likely have created something on my own using these tools that is at least close to how I had imagined it when I wrote the story.

After seeing tools like Replit and Cursor help non-coders create Apps, I realized that the App ideas I had back in the Spring of 2023 might be something I could actually build and deploy. From there things have blossomed a bit. I have already created a prototype and done early testing on the first App.

If AI video were capable of creating TV series and movies that people might be willing to pay for, then I might be down that hole like a number of others. Many of them are creating some amazing content, some of which is being used for music videos and ads. Cool stuff, but not what I am interested in doing. I am still not fully aboard the AI video train just yet. Even if Veo 2.0 were to be released in January, it would not change my early plans for the year. However, by April I may be open to working more on AI video. We'll see. I am open to change based on AI updates, but I won't alter my plans until it makes sense to do so.

I had been thinking that the trailer for my TV series would be animated. Veo 2.0 may change that plan. In fact, I have three other animated projects in mind. These updates keep happening so fast. Animation seemed like the best path for 2025 until Veo 2.0 previews started to drop. It could be that by summer life-like AI Video is indistinguishable from real life. 

In the meantime, I will continue to focus on the creation of the business. It is meant to eventually help support my creative efforts. A SaaS company with a few AI-wrapped Apps is one part of the business. My creative side is of course another part of it. In between, I don't necessarily see myself as a content creator. While there are plenty of amazing content creators out there, that is not what I want to be focused on. That said, I will be creating some content to go along with the Apps, some learning materials as well as marketing content. I have no intention of showing my mug all over the place; I am too old to be faking a smile for you. You may be hearing a good bit of my voice though.

It has been a good year. Plenty has been achieved. Much has been planned for the new year. Buckle up, 2025 is sure to be another eventful year. I wish you all the best in 2025 and hope you will join me on an adventure that has been a lifetime in the making. 

Thursday, November 7, 2024

Old Glory




It has been quite the year. I do not like to talk about politics on public forums. There are enough people out there who are doing that. I try to keep informed about the views from both sides. As a creative person, I try to see things from everyone's POV.

The country made a choice this Tuesday, and for good or for bad we will all have to see where it takes us over the next four years. We are all in this together after all. Thanks to our military might, the United States of America is the most powerful country in the history of the world. We have long been viewed as "the shining city on a hill" because of the opportunities available to people here. Let's all hope we can maintain our strength and remain a beacon of hope that many in the world still look up to.

As those who have kept up with my blogs will know, I embraced the changing AI landscape last year. I have prided myself on my ability to adapt to my surroundings over the years. And yet, looking back, there are times when I tend to stay in situations that do not jibe with me, and it has cost me.

Last week, someone said I had midwestern sensibilities. They were referring to my preference for routines. Routines can be a good thing. My tendency is to settle into them a bit too deeply sometimes, even when they go against my best interest. It can be a bit of a flaw for me. I have stayed in situations I should have left long before I did. What can I say, I like the comfort and familiarity of routines. They have made it easier for me to settle into periods of work and creativity. 

Now is the time to turn the page on the noise of the past year. There will be no more annoying political text messages or ads in the mail. The cacophony of pleas for attention will fade to a murmur as we begin to look forward and plan for the days, weeks, months, and years ahead. America has made her choice and we will see where that takes us. 

My plans will not change, largely because I have worked so hard to create them and I am a creature of habit and routine. But also because I knew well before the elections that, whatever happened on Tuesday, AI would continue to advance in all fields regardless of who won. It has been clear for a year and a half that with the billions of dollars being pumped into all of these AI companies, things would continue uninterrupted on the path towards AGI. That is why I knew my plans, which don't necessarily rely on us reaching AGI, would not change as a result of the elections. I just need the Gen AI tools required to create movies and TV shows to improve a little bit more and I'll be happy.

I'm a simple man who loves to create stories and share them with others. Making movies and TV series has been my dream for a quarter century. I don't need fame, but I would like to share stories that I believe people might enjoy. With Gen AI that is possible. Without Gen AI it is far less likely, if not impossible. It's that simple. And like I said, I'm a simple man. The path has been clear for some time now. I'm just waiting on all the right tools to drop so I can proceed. We are getting so close. 

You can see on Twitter (X) that people are leaving in droves. Some probably stuck around just for the election. Would they have stayed if the results had been different? Maybe, maybe not. It is my favorite platform -- has been since 2010. It is still the platform for all things AI, even if it may become less of the world's town hall than it had been for nearly two decades. I need those updates so I can keep up with the changes in technology that have begun to reshape the world at an accelerated rate.

I am still debating whether to pay for a subscription. We'll see. If I feel I can start my business within the next few months, I may pull the trigger by the end of the year. I have several irons in the fire, so to speak, so I will have to see how things play out over the next few weeks. Whether I can start my business as I have been planning these past few months or not, I have every intention of using AI to create movies and TV shows. Which will eventually be its own business. However, while the tech is getting closer and closer to where I need it to be, it is not quite at the point where I can actually start producing full productions at a quality level that viewers will accept. 

At the minimum, next year I hope to create at least one short film and one trailer for a movie or TV show using only AI tools. There are plenty of good-quality AI shorts and trailers out there now. As I predicted a year ago, AI animation is leading the way. I am confident we will see an exceptional animated feature-length movie created with only AI tools within the next month or two. And viewers will be unable to tell that it was made with AI. The Pixar-style stuff looks unbelievable today. It's only a matter of time before someone puts something worthwhile together and it gets picked up by a streaming service. 

I still have a lot to learn before I can create something like that. Pixar is not really the style I would be going for, but I am not ruling out making something with an Animated style first as opposed to life-like. I think feature-length life-like content is still six months off. Minimax and Runway have some amazing new tools that you should check out if you haven't already and are interested. And I have a feeling Sora could be dropping any day now. The elections are over and there is no longer any reason to hold it back.

It is time for me to refocus my attention on my personal goals instead of stressing about the future of the country. Election day is over, and the die has been cast. I hope we all can come together and face the future in a positive way.

Monday, July 1, 2024

July




June was interesting. July is already intriguing and it's only the first day of the month. Buckle up! 

I spent much of June waiting for OpenAI and Google to release all of the features they had both pimped out to us in May. OpenAI did come out and say that the Voice model will roll out gradually and that most of us plebs won't see it until the fall. Whether that means after the US presidential election or not, who knows. But that was not the only new feature. I also need to try the image creation capabilities they teased, especially to help me create a graphic novel or illustrated novel. I prefer to use only a few tools to create everything I need for these image-heavy projects. I like Midjourney a lot more for image creation, but I keep hoping that OpenAI will either improve Dall-E 3 or provide a new image generation tool with better quality and more capabilities. Not sure where Google's updates are either. I especially wanted to try the video model Veo and Project Astra. Oh well. I guess this is yet another lesson in how patience is a virtue.

While the big boys have been overpromising and underdelivering in a timely manner, we now have a few new AI video generation models to fawn over. I touched on this in my last post. However, I have had time to think things over since then. On Friday, Runway started to grant access to more people, namely those in the Creative Partners Program. While I did apply last week, I was too late to get access. Hopefully, I'll be allowed to join the CPP at some point so I can get early access to future tools. After seeing what Gen-3 could do, I was thrilled to see Gen-3 Alpha rolled out to everyone today. I am all signed up and ready to start using these new tools. Perfect timing. Thanks, Runway.

Over the past year, I was reluctant to use the existing AI video tools, something I have mentioned here several times. The quality was not good enough. My focus for part of the last year had been on AI images. Even my writing plans have been guided by the great quality of AI images and the ease with which they can be created. After the recent two-month query period for the TV series I created, my main focus was meant to be a two-part illustrated novel series and a graphic novel series. Having learned enough about creating AI images, I felt confident I could create not only illustrated novels but also graphic novels. However, with these AI video tools all dropping in the past few weeks, and more still to come, I have been forced to reconsider my immediate plans.

Ever since last spring, I have had an eye on the AI video space with the thought of diving in once the quality reached a certain point. Sora had me dreaming, but its delayed release had me focusing on what I could do with AI images. If I had had access to Sora in February, I would have created a trailer for the TV series to go with the pitch deck and the series bible I created for my query package. Oh well.

I knew when I saw those early Sora videos that other companies would start to catch up. And when they did I would pivot some of my time and attention to AI video. While AI images are at a point where I can create what I need for the illustrated novel series and for the graphic novel series, I think those projects have become secondary for the next month. It is time to learn to use these AI video tools. I have been waiting so long to have this type of control over moving images once again.

It is one thing to write a story and have people read it. With a novel or an illustrated novel, I still have control over what a reader sees. Whereas when screenwriting, I have to rely on countless others to bring my vision to life. With AI video tools I have near total control. I say near because we are still early in the AI video space and these things are not perfect, even if they are incrementally better than what we were seeing before Sora. This reminds me of the kind of control I had when I was making short films back in the day. Because of that, I will spend a big chunk of time in July focused on AI video and learning all I can about AI audio tools.

The one thing I have not mentioned much here is my desire to create an App. I have spoken with the people close to me over the past year about my desire to create it, but I wasn't sure the App was something that was needed because I saw others creating somewhat similar Apps and GPTs. However, I think I can make an App that will help a lot of people and help me learn more about the process of creating one. I had considered making a GPT through OpenAI, but I think an App is a better way to go, even though I will have to do a good bit of research. I think it can help more people in that format than as a GPT.

GPTs seem to be quickly becoming a thing of the past. Microsoft is doing away with them and there are rumors that OpenAI is not as keen on them as they used to be. I want to keep learning about technology but I also want to create. I will likely lean on AI to help me build the App, while also learning about the process. I am an artist, not a martyr, so I don't mind leaning on AI for not only the image and video side of my new creative process but also some of the technical aspects of creating and launching an App. I have learned a lot over the past year, but I cannot just sit down and crank out this App without some guidance.

So, my main focus for July is education: learning about AI video, AI audio, and App creation with AI assistance. We'll see if I can learn all that I need in one month. Maybe, maybe not, and I may need to keep at it for another month or two. I'm up for the challenge. In whatever free time I have left, I'll also try to get some work done on the first book in the illustrated novel series and create a few panels of the graphic novel. Busy. Busy. Thanks for reading.


Wednesday, June 5, 2024

Will AI Change Everything Or Will It Be The Same Only Different?



I am starting to have doubts about how much AI will benefit me as a creative person. This time last year, I was just getting my beak wet, but my imagination was soaring with how it would creatively benefit me. While I have said for a long time that I write what I want to read or see play on a screen, deep down I'd like others to take some enjoyment out of the process as well, and maybe even make some money off of the hard work I've put in and the sacrifices I have made. 

Self-publishing books has almost run its course for me. What started as a personal challenge revealed a lot about not only myself but the business of books. It is tough as hell to sell books. You first have to have a great book, and then you have to stand out and be recognized amongst all the other books. When you self-publish it is almost impossible to stand out, especially now that the newness of the self-publishing craze has died down.

Now think about movies. Independent movies have always been a tough sell. And now, even blockbuster movies made for hundreds of millions of dollars are having a hard time breaking even. This time last year I could see how AI would give me superpowers. The thing is, it also gives everyone else creative superpowers -- most of whom have never lifted a finger in an attempt to create worlds for themselves before AI, let alone done so to entertain others. And yet, within a year or two everyone will be able to do just that.

Those who have been keeping up with the progress of generative AI and have seen the demos for all the new products know that massive change will wash over the general public soon. Many of those in Hollywood who have been fortunate enough to get access to Sora can see that this technology will change the business forever. Those of us who have been paying attention since February know exactly what I mean. The results are stunning and will only get better.  

At that time, I was knee-deep in preparing to query Hollywood about a TV show I had been working on. But it was plain to see that Sora would change everything once people had access to it. I didn't let that deter me from my task: contacting others about the incredible TV series package I had put together. In fact, I worked into my pitch just how I envisioned AI tools helping with the post-production marketing of the series, and perhaps even augmenting what had been shot once things were completed. I didn't want to suggest that any new tools like Sora should be used in the production itself. Those things should be left up to the production team. My job is to lay out the road map for the series. All I need is one person to believe in my story and the big ideas I have laid out. "It's hard out here for a pimp."

The most important part of my process is coming up with a story and then putting in the work to write it for others. Once that is done, this is where AI would be massively helpful to someone like myself. Most people who are not authors or screenwriters will need AI to help them write stories, and therefore they will be unable to copyright them -- as the copyright laws stand now. That might be my only leg up on the masses who will be able to create just about anything with a few prompts. But if I can eliminate the need to convince others -- people who are busy with their own projects and in constant contact with writers they actually know and have worked with before -- then I could focus on using my storytelling skills to create a film or TV show with tools like Sora. I'd rather work with people the old-fashioned way, but they have to want to work with me as well, and I can't force people to buy my work. Unlike most people, I would be able to copyright the stories I want to use in partnership with Sora-type tools because I would have written them, and maybe that is the window of opportunity for me. Maybe.

With each email I send that gets no response, my heart breaks a little. Not for myself -- don't pity the Pitters -- but for the work and the people it might touch, inspire, or somehow affect. I look forward to using AI as yet another creative means to an end. It feels like I am an explorer awaiting a ship that is being built. Soon enough I'll be off exploring new lands. I've sent thousands of emails through the years. It's like water off a duck's back at this point. I know my efforts are usually in vain because they largely have been for twenty years, which is fine. That's the way it is. Sometimes you catch a break, but more often than not you will have wasted time, energy, and passion only to be ignored.

If I had been more involved in the business over the past twenty years I might be more conflicted about using AI tools. However, I spent much of last year during the strikes and the rise of Generative AI wrestling with my conscience about its use. And recently I have had time to reflect on all the blood, sweat, and tears I have put into projects over the past twenty years with 90% of people unwilling to even respond to an email. And I feel empowered for the first time in a long time. 

I will not hesitate to bypass people in order to get my stories in front of an audience. Hopefully, those AI tools will be available soon so I can get trained up and put them to use ASAP. I've gained a lot of experience over the past two decades and learned even more about patience. In the meantime, I'll keep writing and sending personalized emails to those whose work I respect and would love to work with in creating movies and TV shows. 

I never dreamed of having a media empire when I started writing; I have just kept plodding away at creating stories in different formats. But with the help of AI, I may be able to create dozens of movies, TV shows, and graphic novels in a short span of time, and all of the stories will be copyrighted. There are likely thousands of people like myself who have been writing for decades and only publishing or putting onto a screen a few of the works they have actually created. In fact, if the technology is good enough, that will be one of my goals -- a media empire. My ambitions are usually bigger than what I can achieve, but then I am a dreamer and always think big.

Once I have actually completed my first ready-for-consumption production I will have to deal with marketing and sales. Cringe. At least I will be on the back end of production and not stalled out waiting like a jackass for people to respond to an email. I have been reluctant to call producers and directors about the TV series at this point, even though I have a few phone numbers. But once I have a TV show or movie in the can, you bet your ass I'll be on the phone using all my sales skills. Regardless, it will still be a tough task to earn people's attention.

This is where I am starting to have doubts. Not in my own ability to tell a good story, or that the technology might not be good enough. It's that the technology likely will be good enough, and I believe I will be able to create all the stories I decide to pull from my library of unpublished works. However, the marketplace will become oversaturated and I will face a problem similar to the one I face today: getting people's attention and earning their interest. The good thing is that the work will be completed and I won't have any regrets about stories sitting on a shelf because I could not get anyone to help me turn them into movies or TV shows. But they might very well be stuck in a cloud, unwatched, next to millions of other unwatched movies and TV shows. I'm not sure which is worse: querying dozens of people with a well-written story whose first-season plan is ideal for a number of streaming services and getting only a few responses, or creating the TV series with AI and having no one watch it. Both are tragic.

I've always said that I write what I want to read or watch because it does not yet exist. But if everyone is doing that and not consuming what anyone else creates then that is pretty damn depressing. And it probably won't be good for society if we just stay in our own imaginary bubbles without taking in new information. I hope that doesn't happen. I like what other people create and I always will, but I also like what I create as well. I may create it because it is something that I would want to read or watch, but I use that as a barometer because what I really want is to create something that others might enjoy. I don't want to sit around at night and read or watch my stories. That sounds vain and fucking boring. I hope we don't become a society that sits around quasi-creating movies or TV shows with a simple prompt custom-made just for us from scratch. Storytelling is something that is shared with others, even if we may experience it on our own in our own homes. We then go talk about it with others. 

I can hear the conversations with friends in the future. "I generated and binge-watched this amazing cop show this weekend. I programmed it to be like The Shield and NCIS: Los Angeles." Will people even be able to send that show to their friends so they can watch it, or will it be something only you can watch? Can it be shared with the world? Who makes money off of it? Do people get quasi-famous for prompting something they had very little actual input in? So many questions. What I fear most is the loss of shared experience, which has been the point of stories going all the way back to cave art. There are things we can learn from each other that we may not have learned on our own.

The writing of a story is the act I love most, inspired by personal experiences and what I have learned about the world. An AI will do much the same but without personal human experiences of its own. That seems to be the barrier these tools may struggle to pass. Blade Runner touched on this subject. They may become humanlike but may never be able to become truly human. But then we may become more machine-like as we look to expand on our own mental limitations.

Once a lot of entertainment is AI-generated, there may have to be a notification system informing viewers how much of a story was created by a human. This might tell a viewer or reader whether it has been copyrighted and how much was manufactured by an AI based upon a prompt or prompts. It will likely get to a point where it won't matter as much because the AI will be a better storyteller than most people. However, there will always be a need to know if what we are consuming comes from the soul of a flesh-and-blood person who has lived a human life or from an AI that has been trained to simulate those experiences.

It's tricky because eventually we will reach AGI, and we have no idea whether that will be a net good or a net bad for society as well as storytelling. Until then, I will try, however futile my efforts may be, to create the stories that do not exist that I would want to read or watch, hoping that you might too. Thanks for reading.

Friday, February 16, 2024

And then Yesterday Happened


This month I have been doing a lot of blogging because this time of year my gears are still churning out plans for the year ahead. In December, it is not always easy to take stock of the past twelve months with all that is going on during the holidays, so this time of year is also about reflection. A lot happened last year with the rise of AI and my recalibrating is an attempt to adapt to all the changes. I didn't achieve all that I wanted, but I did enough to not hate myself either. Adding AI images to a previously written book was just a few baby steps but it was something. Preparations to begin on a larger project, my first graphic novel, were well underway. And then yesterday happened. 

Before I get into what happened yesterday, one hour after I published my last blog post, let me go over another writing project that has been percolating for a few years. Not only have I been laying the groundwork to use AI to create the images for my first graphic novel, but I've also been preparing to pitch a multi-story TV Series. A series that I created a podcast for last year that would not only stand alone as part of a transmedia pitch package but also take place smack-dab in the middle of the first season. Had I not created the fictional podcast I would not have had the biggest idea of them all, something I have been wanting to do since I was a kid. Back then I read several "Choose Your Own Adventure" stories that captivated my imagination because they were different than traditional books. 

Over the past few years, there have been a few interactive stories like "Bandersnatch" that give you a couple of chances during the movie to change the course of the narrative. Netflix has also done a few other interactive shows, mainly geared towards children like those books I read as a child, but that "Black Mirror" movie reawakened something within me. My taste in the stories I like to read and watch has changed since I was a kid and so has the technology. 

While most VR goggle devices have not really moved the needle, Apple's Vision Pro has blown people away. So, as I was preparing my pitch documents for this TV project I realized that the multi-story series I have been working on is actually ideal for a "Choose Your Own Adventure" TV series. I will pitch it as a multi-story series, but I will also present it as a candidate to be made into an interactive story. The fact that there are multiple stories told within one season, the sci-fi, supernatural, and mystery aspects of the series, and the podcast worked into the story all make it a prime candidate to be a groundbreaking interactive TV series unlike anything that has come before.

I had planned to go into even more detail about this interactive story next week, but then OpenAI (the company that brought us ChatGPT and Dall-E) released a demo for their new text-to-video tool Sora. Why is it a big deal? Because, as I stated in my blog yesterday one hour before the Sora news dropped, AI video has been stagnant for almost a year. If you could manage to get a stable render, you would only get 4 seconds of video. And so we have had to suffer through unstable, pieced-together 4-second clips that only people working in the same space could really appreciate. The rest of us would check out at the first sign of instability or once we got tired of the constant cuts. It had become annoying.


That is why Sora is such a big deal. While we do not have access to the tool yet, the crew at OpenAI released a few dozen videos that showcased Sora's capabilities, and at the same time ended the need for us to ever watch any of those dodgy 4-second videos again. Companies like Runway and Pika must have lost their damn minds yesterday, as Sora all but ended them. Unless they have better models that they have been holding back. But I doubt they will come close to Sora.

OpenAI took the lead in LLM chatbots. While Google narrowed the gap yesterday in the chatbot field, the introduction of Sora hints that GPT-5 is also about to be released and will likely blow Google out of the water.

Sora will also be challenging Midjourney for the image creation title, as Sora can create static images that look even better than what Dall-E 3 produces. Dall-E 3 is impressive, but it has more limitations than MJ 6, which was released in December.

One of the problems, if you can call it a problem, with all these tools is that there are just so many of them. 2023 was all about GPT-4's domination and the multitude of fantastic image generators -- Midjourney, Dall-E 3, LeonardoAI, Stable Diffusion, Firefly, and several others.

While the video generators of 2023 were amusing, they didn't really move the needle as much as the chatbots and image generators. Runway, Pika, Leonardo, and a few others were all generating similar types of results: those pieced-together 4-second clips. Deforum is a bit different, as it creates longer videos where the image is constantly morphing into something similar to what it was. I liked all of these to varying degrees, but the Deforum content will likely rise above the rest because it is distinct from the others just mentioned and also different from what I have seen so far from Sora.

Interestingly, over the holidays I started to think about video games because I was so frustrated with AI video's limitations. Unreal Engine 5, which is mostly used for gaming, has been used in shows like The Mandalorian, and Duncan Jones is using it to film Rogue Trooper. So, while I was thinking about UE5 as something I might need to learn if I wanted to create more realistic AI video, I saw Sam Altman's tweet about Sora. As I looked through those videos I noticed some similarities to UE5. Last evening, Tim Brooks of OpenAI, one of the people who worked on creating Sora, dropped some of the research on Sora in an article titled "Video generation models as world simulators." That's when I realized that, even though I am not a tech guy, I had been able to deduce with my limited knowledge that tools like UE5 were needed to take the next step in AI video.

I've already seen people groaning that UE5 may be a part of Sora's training data, but it does make sense. Not all of the video that Sora creates has the look of a game, but you can see the influence in some of the videos. Let's just say that this is even more exciting to me than ChatGPT or Midjourney. Tools that I can use to make so much more content than I ever could before. But with Sora, there is a real hope that I, along with millions of others, will be able to make movies. Maybe even a TV show. 

This takes me back to the 2000s when I was making short films. I stopped because I was paying for those out of my own pocket, and even though they were just short films the time and money needed to create them took a lot out of me. 

Anyone who knows me will understand what something like Sora could mean for people like me. There is a lot to learn about this new AI video tool. You can create up to a minute of steady video. One minute! I was saying in another blog post that until we got to 10-30 seconds of stable video, making a movie was impossible. Whether it is possible now will depend on how much control we can have over the generations. Can we create a character in one video and have that character be consistent in the next? That is the big one. It has been the big one with AI image generators, and only recently, with Scenario and other tools, has that become a much simpler thing to achieve. Consistency, stability of the videos, believability of what is created, and the ability to edit what has been created. I'm sure there are other things I'm overlooking right now, and I'm sure there will be plenty of flaws that may limit what can actually be created. But it is a time to hope and dream again about AI videos.

Sora brings us closer to a truly immersive world like that in "Ready Player One." We storytellers are going to have to step up our game to meet the challenge of creating these worlds because the tech is getting a lot closer to making it possible. I am trying to rise to that challenge with a possible interactive series, some of which may be able to be augmented by content created with a tool like Sora. But I may have to aim even bigger than that, or maybe this TV Series can be altered into an even more immersive experience. Either way, I am attempting to adapt, but I still have to keep learning to keep up with these changes. 

I know I am not the only one who cannot wait to get their hands on Sora. But I literally have a library of screenplays that are ready to be created. I know this tool will not be perfect, and I am not getting my hopes up too high because I have learned that is never a good idea, especially with these early AI tools. Even the best image generators still have major limitations. Sora will change the game of AI video generation, and we will likely see some amazing short movies as a result. However, based on the previews, some major buzz-kill issues may limit what we can do. Someone with more technical nous than me may be able to overcome those issues and even create enough excitement about their project to get a theater or major streaming release of a movie before the end of the year. We'll see.

My being able to tackle my library of stories may not happen until we get a few more updates in, but we are getting closer. I hear Midjourney is close to showcasing its own video model. Exciting times to be a creative person, that is for sure. Now it's time to come back down to Earth and work on the projects in front of me until Sora actually releases. Then I'm not sure how I will be able to focus on anything else, but until then I have a TV series to pitch and a graphic novel to create. Thanks for reading.