Thursday, April 04, 2024

Are LLMs amazing or simply stupid?

I watched a very relevant TED talk by Yejin Choi, “Why AI is incredibly smart and shockingly stupid”, which opens with the quote “Common sense is not so common”, attributed to Voltaire around three centuries ago. I totally agree that the current large language models (LLMs), which many call AI (but I call Artificially Intelligent), lack common sense. This should be very obvious if you have ever used them.

Still, I am finding LLMs helpful. They're sometimes amazing for cleaning up typos, especially for folks like me who struggle with dyslexia. They're also good for fixing the weird stuff that happens when you dictate and/or use predictive text.

I've been trying out a few of the most popular ones.  To compare them, I created an informal scorecard system that tracks how well they handle different aspects of text, like key ideas and paragraph sentiments.  Here's how it works:

  • OK: I can use the text without any changes.
  • Reword: the wording needs a little tweaking, usually to sound less know-it-all.
  • Fact Check: some points get over-embellished.
  • Wrong: clearly made up or simply wrong.
  • Missing: important information left out (ignored).
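The scorecard amounts to tallying one rating per sentence (or point) of a model's output. A minimal sketch of how that tally could be kept, say in Python; the category names come from the list above, but the ratings data here is purely hypothetical:

```python
from collections import Counter

# Rating categories from the scorecard above
CATEGORIES = ["OK", "Reword", "Fact Check", "Wrong", "Missing"]

# Hypothetical ratings, one per sentence of a model's summary
ratings = ["OK", "OK", "Reword", "Fact Check", "OK", "Wrong", "Missing"]

tally = Counter(ratings)
for category in CATEGORIES:
    count = tally.get(category, 0)
    print(f"{category:>10}: {count} ({100 * count / len(ratings):.0f}%)")
```

A higher share of OK and a lower share of Wrong/Missing would mean a better scorecard for that model.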

So I was recently asked to speak, and I outlined some ideas, but in a rehearsal it took 20 minutes. I recorded and timed it by dictating into a Word document (Windows key + H): roughly 6 pages of rambling text, lots of good stuff but… So I asked each of ChatGPT, Claude.AI and Google’s Gemini (previously Bard) to summarize it into a single page.

Their scorecards were not so good.

I feel that it is the Dunning-Kruger effect that these AIs suffer from most! These AI bots display a smug self-confidence that they know everything, but show no common sense to realise how little of human norms and values they actually understand.

Saturday, March 30, 2024

Finding the Ideal Surface for My 9X5 Artworks on Wooden Panels

The quest to find the ideal surfaces for my 9X5 inch artwork for the 2024 VAS Exhibition continues! I'm planning to create two pieces, one in watercolour and the other in soft pastels. To test out different surface preparations, I made three variations for each medium on some scrap plywood.

 Let's start with the watercolour surfaces:


  • W1: Just plain white acrylic, which behaved a lot like Yupo paper.
  • W2: White acrylic topped with a thin coat of regular gesso. This improved colour intensity and gave sharper hard edges compared to the acrylic-only surface.
  • W3: White acrylic with a 50/50 mix of transparent gesso and regular gesso. This surface had the most texture, but the paint went on nicely with great intensity, and it was easy to lift off.

Both W2 and W3 seem promising, except that the watercolour washes picked up brush marks on all the surfaces. For the next trial, I'll need to apply the gessos with a foam roller.

Now, onto the pastel surfaces:

  • P1: Just an old layer of Derivan background paint (China Red). This added a nice tooth to the surface, and the coloured ground made the pastel colours sing.
  • P2: White acrylic with standard gesso on top. The least successful option.
  • P3: White acrylic with the 50/50 transparent and regular gesso mix. Good coverage with a light pastel touch, but the colours looked pale.

 I tested various pastel marks, from very soft to hard, and even Conte pencils. Getting details on such small panels is going to be a challenge, and I need to experiment further with different "background" paints and "watercolour ground".

Stay tuned for more experimental fun on the road to the perfect 9X5 surfaces!

Saturday, March 23, 2024

Spambots be gone


Has anyone else discovered a sudden massive increase in the number of views being recorded on Blogger (this site)? I was rather pleased with myself for struggling to over half a million views over the past 20 years. Yes, I have been using Blogger for 20 years! However, in the past month the pageviews have risen by over 100,000. Over the past week the recorded pageviews are between 2,000 and 3,500 per day, but actual pageviews, even on the most recent posts, seldom exceed 10 per day.


I do suspect bad actors, possibly spambots, content scrapers or maybe a hacking attempt gone wrong. I really cannot see what might be achieved by such activity, other than perhaps wasting bandwidth (which I can see no evidence of). Is it part of giving a legitimate address to a phishing scam to fool validity checks?

OK, now I have noticed. Do I try to inform Google? Yes, but I have gotten nowhere again. Have you ever tried to contact Google with a problem? You will know what I'm talking about. It is another massive "I don't care" / #FAIL, Google!


So what can I find out about these extra views (which I might just call an attack)? Well, it must be automated to reach that number. It seems to originate entirely from Hong Kong, using Apple Macintosh and Chrome, with the referring URL being reported as "other" by Google Analytics. The biggest puzzle is that my Blogger site is recording these extra views, but my actual pageviews seem to remain normally low. Are these numbers even real? Should I be concerned, given the location these pageviews seem to be originating from?


Thursday, March 21, 2024

What is with the cable octopus?

 

Trying to clean up my desk inevitably runs into the USB cable tangle. Whilst USB-A is pretty universal, the number and variety of devices it can ad-hoc connect to a computer is amazingly diverse. Sometimes it’s used for charging; most often it will be to transfer data or connect a different peripheral temporarily.

The downside is many use different or non-interchangeable plugs at the other end. Whilst they all obey the USB standards, the plugs vary to suit the devices they are being connected to. I currently have 5 different device-end styles. That’s quite a tangle on my desk. So I used a spiral cord minder to group the cables together at the USB-A end. A bit of magic using a split tube lets me keep the cables close to the computer but neatly away from my work area.

This little diagram gives an idea of many of the current plug types, but there are many more, including USB-C and a range of proprietary connectors for specific devices, e.g. cameras, phones and watches.

I use pairs of recycled bread-bag ties, colour-coded to help keep track of which ends match up. It's nice, simple and neat. Better still, no time wasted searching for the right cable.

Friday, February 23, 2024

Luminar Neo's GenAI Tools Need More Time in the Oven

Luminar Neo recently added three new generative AI tools: GenErase, GenExpand and GenSwap. I’d seen a bit of hype about them, and they just turned up for a 30-day trial, so I simply had to play with these new features. I have to say they clearly needed more time in development before being released to the public.

The tools are only available through Neo's subscription service presumably because they utilize cloud computing power. This means they will likely never be available to run locally on a desktop.


I was most interested in testing out GenExpand, which is supposed to let you extend the edges of an image. I do like Neo’s panorama stitching extension, but I often get bulbous, untrimmed images when stitching together handheld panoramas in Neo, so I thought GenExpand could help with that. Unfortunately, my first attempt to expand a massive 588MB panorama got stuck, took forever, and then just produced blackness over the area I’d selected. Oh well, back to the drawing board.

A smaller test image did successfully expand, but the new edge addition was blurry and greyscale. On closer inspection the horizon matched, but the clouds and waves didn’t match up well with the original.


Hoping for better luck, I tried GenSwap to insert a kangaroo into a photo. The AI clearly wasn't trained on enough Aussie animals; it produced a bizarre creature, but “that's not a real kangaroo”. At this point, my enthusiasm was waning.


Finally, I tested GenErase to remove objects from photos. It performed decently but didn't seem much better than the standard erase tool already in Luminar Neo. Trying to erase a larger object again resulted in the tool freezing up.


In the end, while the ideas behind these new GenAI tools are intriguing, I feel they simply aren't ready for practical use. Too many bugs, glitches and failures to finish make them more frustrating than functional. Luminar Neo would have been better served by traditional beta testing before releasing them. For now, I don't trust these tools, or for that matter many other developers' generative AI tools, to deliver satisfactory results. Or are my expectations too high? Maybe someday the technology will mature into something more reliable, but for now GenAI feels more like a breakable flashy toy and “trying to keep up with the Joneses”.

Tuesday, February 20, 2024

Happy Birthday Flickr

Today marks the eve of Flickr's 20th anniversary, a milestone that brings back memories of its inception in 2004. They have a nice article about their significant moments and timeline on their blog.

Reflecting on my journey with Flickr, I recall how I initially turned to it as a platform to upload and share photos for this blog, which started out as "things they forgot to tell you about digital photography". It was a time when the concept of cloud storage and HTML image links felt groundbreaking. Moreover, Flickr's community features, such as groups and commenting, added depth to the experience, fostering connections among photographers worldwide. The introduction of their "interestingness" algorithm brought exposure to countless talented individuals, while Flickr’s commitment to Creative Commons licensing set a standard that others struggled to match.


For me, Flickr and digital photography have been intertwined since I acquired my first privately owned digital camera in 2003. Over the past two decades, I've amassed a collection of cameras, each holding its own story. It's been a little embarrassing laying out the number of cameras and estimating the money spent! While I no longer use most of them, I cherish the memories and photographs those cameras enabled. Most are still operational, though finding the right memory card for my original Olympus Camedia, or a battery charger for an early rechargeable, may prove challenging.

As a consulting geologist, I purchased a Canon Rebel DSLR for its video capabilities, and it has served me well for filming professional videos over the years. I still use that Canon today with a tethered setup to photograph my art on a copy stand. Over time I invested in two Pentax digital SLRs as megapixels increased into the 12-16MP range, though I seldom use those models now. They were, and still are, great cameras, just a bit heavy.

The two sleek Olympus mirrorless cameras have re-ignited the joy, fun and passion for capturing moments. Despite the evolution of technology, these cameras continue to serve me well, each with its unique capabilities and charm.

As we celebrate Flickr's anniversary, I can't help but acknowledge how my own photography journey has evolved alongside it. While changes of ownership, lockdowns and changes to Flickr's account limitations have impacted my activity, I remain loyal to the platform, eagerly anticipating events like the upcoming worldwide photo walk in Jells Park.

To Flickr, I extend my heartfelt gratitude for two decades of inspiration and community. Here's to many more years of creativity and connection in the digital realm. Cheers from down under.


Tuesday, February 13, 2024

Photography without a camera

On Saturday I attended an artist talk at the Museum of Australian Photography, focused on three exhibits around “photography without a camera”. One project that really interested me was by Kate Robinson, who created images using generative AI and then made physical prints using the traditional cyanotype process. She ran a workshop on the process in the afternoon.

I've been fascinated by optical illusions, like Rubin's vase, which can be seen as two faces or a vase. I tried generating the illusion through text prompts to DALL-E 2, but it didn't work; their AI evidently didn't know about the famous illusion, and I just got nice vases. So I experimented with Stable Diffusion instead, knowing I could add a noisy starter image with the same prompt, “photo-realistic version of Rubin vase”. This generated something closer to what I needed.



I then gave Stable Diffusion a detailed text prompt asking for “Male & Female Heads in profile facing each other, Professional photography, bokeh, natural lighting”. I used the latest SDXL 1.0 model, which generates very realistic images. This produced some great portraits with sharp focus on the faces and soft, blurred backgrounds, while maintaining the illusion styling.

1. Stable Diffusion Starter
2. Stable Diffusion generated images
3. Selected image upscaled
4. Greyscale & Inverted for transparency


I picked one of the four AI-generated images with good tonal range, upscaled it, and inverted it to make a negative. Then I printed this negative image onto a transparency. Kate had already prepared some watercolour paper with the light-sensitive cyanotype chemicals. I sandwiched the sensitized paper with the transparency under a piece of perspex and exposed it to sunlight for about 25 minutes.
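The greyscale-and-invert step comes down to simple per-pixel arithmetic, which any image editor or library will do for you. A minimal Python sketch of the idea; the Rec. 601 luma weights below are one common convention for greyscale conversion, and the example pixel value is just made up:

```python
def greyscale(r, g, b):
    # Rec. 601 luma: perceptual weighting of the red, green and blue channels
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def invert(v):
    # 8-bit negative: dark tones become light and vice versa
    return 255 - v

# A hypothetical mid-tone pixel from an RGB image
pixel = (200, 100, 50)
v = greyscale(*pixel)
print(v, invert(v))  # the negative value is what ends up on the transparency
```

Applied across every pixel, this turns a colour image into the tonal negative that blocks or passes sunlight during the cyanotype exposure.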

When the transparency was removed, the image had magically appeared on the paper! Somewhat faded, just like seeing that first print develop in the darkroom under the red light. I rinsed the paper first in water and then briefly in vinegar to set the blue cyanotype tones. In only about 40 minutes, I had gone from an AI concept to a one-of-a-kind cyanotype print, all without ever using a camera!



This project showed me the creative potential in blending digital and analogue photographic methods. I'm excited to experiment more with AI-generated images and bringing them into the real world through alternative printing processes.