• Skullgrid@lemmy.world · 3 days ago

    You deserve a better reply and I will write you one later, but…

    Arctopus fucking rules but this take is hot as fuck my dude.

    Bro, the song Skullgrid was generated by a Java program that Colin and a programmer friend of his, possibly Mike, were working on. In 200X. You’ve been rocking out to AI-generated music before you even realised. Brian Eno also had music that was meant to be generated by an automated system, and afaik, so did John Cage.

    Edit : https://colinmarston.bandcamp.com/album/computer-music-2003-2004

    the sounds were created in Wire, a program using Jsyn written by Phil Burk, which is a java-based synthesis engine. i created the scores in JMSLscore (a java-based scoring program by Nick Didkovsky). this allowed me to access sounds i created from scratch, but organize them with a traditional musical staff.
    “Fore” became the warr guitar part for the song “Skullgrid” from BTA. the first 41 seconds: beholdthearctopus.bandcamp.com/album/skullgrid

      • Skullgrid@lemmy.world · 1 day ago

        I remember talking to the guy who recorded that song about its composition, and there’s the evidence trail in this message. So yeah, it was generated by a program that composes music from loose parameters you feed it, rather than the composer choosing where every note goes.

        And no, I’m not saying Bach did AI work.

    • UniversalBasicJustice@lemmy.dbzer0.com · 3 days ago

      I created the scores in JMSLscore

      He created the score. If you equate using scoring software, MIDI and synths to creating slop with genAI we’re done here my dude.

      • Skullgrid@lemmy.world · 3 days ago

        Now, not to be an asshole, but I remember 1) talking to Colin before a Gorguts show in 2014-ish and 2) hearing it confirmed in an interview.

        Oh, also, according to the documentation for JMSL (Java Music Specification Language), its purpose is:

        It is suited for algorithmic composition, live performance, and intelligent instrument design. At its heart is a polymorphic hierarchical scheduler, which means Java objects that are different from one another can be scheduled together and interact with each other in conceptually clean and powerful ways.

        JMSL’s open-ended nature will reward your programming efforts and your creativity by offering you a rich toolkit for making music.

        Just to beat on this idea a bit more, with JMSL you can make music based on experimental music theory, statistical processes, any algorithms you can implement… you can notate that music using JMSL Score, or leave it in the abstract. You can use Java’s networking tools to grab data off the Internet and sonify it. You can _________ (fill in the blank and start slingin’ code).

        If you want to open a window with standard music staff notation and start entering notes, JMSL Score will let you do that as well. Straight out of the box. Later you can start writing your own custom note transformations, or generating musical material automatically, which JMSL Score will notate for you. Of course all music generated for and within JMSL Score can be mouse-edited, and transformed again!

        https://www.algomusic.com/jmsl/download.html

        JMSL_v2_20250209\JMSL_v2_20250209\html
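
        To make “algorithmic composition” and “statistical processes” concrete, here’s a minimal Python sketch of the general idea, a first-order Markov walk over a hand-written pitch table. This is a generic illustration of the technique, not the JMSL API:

```python
import random

def markov_melody(transitions, start, length, seed=None):
    """Generate a melody by walking a first-order Markov chain.

    `transitions` maps each pitch to the pitches allowed to follow it.
    The composer designs the table (the statistical process); the
    program picks the actual notes.
    """
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

# A tiny hand-written transition table over a C-minor-ish pitch set.
table = {
    "C":  ["Eb", "G", "C"],
    "Eb": ["F", "G"],
    "F":  ["G", "Eb"],
    "G":  ["C", "Eb", "F"],
}

melody = markov_melody(table, "C", 8, seed=1)
```

        The human creative work is in designing the transition table and curating the results; the program just rolls the dice, which is exactly the kind of tool/author split being argued about here.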

        • UniversalBasicJustice@lemmy.dbzer0.com · 3 days ago

          I see where you’re coming from and will concede JMSL’s ability to algorithmically create music.

          I still maintain an artist using that or similar software (Guitar Pro, etc.) to translate their own ideas into a more manipulable form for composing/practicing is fundamentally different from prompting a genAI that has been trained on ideas stolen from actual artists.

          That said, music written via formula to cater to the lowest common denominator and generate the greatest possible monetary return is certainly closer to how genAI is/will be used, but the human element involved in writing, recording, and performing that music still distinguishes it from the sort of slop showing up on Spotify. AI-generated works are derivative beyond even the blandest pop. The only human involved is the prompt writer at best; the lyrics, melody, and the recording itself are statistical approximations entirely devoid of human creativity, and that is an utter tragedy.

          I’d much rather the record companies be replaced with systems that don’t alienate the artists from their labor and creativity. Embracing slop is playing into the execs’ hands and removes all artistic merit from the process.

          Quick edit: generated slop training on generated slop is already a problem and will get exponentially worse as more platforms are flooded with it. That will only alienate and divorce it even further from reality. It will only get worse.

          • Skullgrid@lemmy.world · 1 day ago

            Here’s the time I’m devoting to the response you deserve, covering both posts you made. Sorry about the delay.

            I see where you’re coming from and will concede JMSL’s ability to algorithmically create music.

            I still maintain an artist using that or similar software (Guitar Pro, etc.) to translate their own ideas into a more manipulable form for composing/practicing is fundamentally different from prompting a genAI that has been trained on ideas stolen from actual artists.

            Yep yep, true fax.

            Quick edit: generated slop training on generated slop is already a problem and will get exponentially worse as more platforms are flooded with it. That will only alienate and divorce it even further from reality. It will only get worse.

            Yep yep. Feedback loops in neural nets are bad business.

            Wouldn’t you prefer the ‘actual’ musicians making ‘actual’ music be recognized instead of being buried even further under exponentially growing pools of emotionless notes arranged into emotionless music? The musicians you and I both appreciate for their creativity and skill are having that skill and creativity stolen from them and you’re cool with it because pop doesn’t innovate?

            Right. “Buried even further”. It has pained me for about 20 years that the greatest musicians of our time are just fucking left to rot, releasing one or two albums here and there and holding down meaningless day jobs, while the corporate shitshovels rake in the big bucks and dictate what music is. The damage was already done before any computers, AI, or anything of the sort entered the picture.

            The article is about how there’s generic AI music on Spotify. Before that, there was already a fuckload of generic human music on Spotify; but when you said that music was generic, shitty and soulless, you got painted as some heretical elitist. I dunno, maybe I’m too millennial, since while Estradasphere was ignored, Igorrr is at least playing big festivals to big crowds and getting recognised in places.

            The damage has already been done, and I am getting mad at people blaming new tools (built on existing compositional ideas) for human failures that have existed, and will continue to exist, as long as there are human beings.

            Record execs have leeched off actual creativity for a solid century now and you want to end them with an even more soulless product that still doesn’t pay artists?

            I don’t particularly want to hear shitty generic AI music any more than I want to hear shitty generic human music. It’s all the same bottom-of-the-barrel crap. Artists have been complaining about getting no money for working with record labels already. No one’s getting fucking paid to begin with.

            The take-home message for me isn’t “the machines are killing us”, it’s “man is a wolf to man”, and blaming tools distracts us from that actual, eternal truth. It’s not the fault of uranium ore that it was used to bomb Nagasaki instead of powering a nuclear plant, or that it was improperly disposed of and caused cancer. It’s people that are bombing each other and giving each other diseases. The war in Israel/Palestine or the North of Ireland/Northern Ireland isn’t religion; it’s desire for resources and the minds of people.

            The only reason the song you linked is ‘imaginative’ is because a real human already imagined it only to have it tossed into the slop pile for a computer to root through. Wouldn’t you prefer the ‘actual’ musicians making ‘actual’ music be recognized instead of being buried even further under exponentially growing pools of emotionless notes arranged into emotionless music? The musicians you and I both appreciate for their creativity and skill are having that skill and creativity stolen from them and you’re cool with it because pop doesn’t innovate?

            Right, but there are many instances of art being made like that and then shoved into museums. Sometimes it’s compositionally interesting to set up a system and try to coax emergent behaviours, or to create a loose system with randomised parameters and a rough idea of what you want, so that it’s different every time.

            Sometimes it’s nice to listen to a hand crafted masterpiece where every meticulous detail has been laboured over for years and years.

            I don’t want AI to displace human made music, far from it. I love music, I make music, and I want my favourite musicians to be able to make ends meet.

            But I don’t want to cloud my objective judgement and say “this is a robot, it has no feelings therefore no one is expressing anything”. The roots and processes that led to the AI have a foundation in both human artistic (music, visual arts) and mechanical (mathematics, programming) creativity and vision. And not just “programmer wrote neural net, fed it human music”.

            Like I mentioned various times throughout these rambling, asinine posts, mechanical composition and using chance in music generation have been around for fucking ages. Eno, Cage, Russolo, and I’m sure someone, at some point in history, has set up a musical performance near birds on purpose to have them accompany it.

            People have made art through collages, and songs through sampling and distorting samples. Programmers have been writing procedural music generation for a long time; I have no idea since when, but I’m sure it has existed; dynamic music in games has existed since the 00s at least, probably longer.

            I think, and I need to emphasise this: in a non-mass-market, non-corporate-media way, these AIs are a way of experimenting with music and having fun, seeing what comes out when you change parameters. These things have the entirety of human creative works (or close enough to it) lying in them; surely you can get some interesting things to come out if you fiddle with it enough and then think about it.

            That said, music written via formula to cater to the lowest common denominator and generate the greatest possible monetary return is certainly closer to how genAI is/will be used, but the human element involved in writing, recording, and performing that music still distinguishes it from the sort of slop showing up on Spotify.

            Yeah, and it sucked when it was some random pop musician getting paid through the ass for no reason other than being a product. You know what, it’s worse with a person, because it sets the bar of musicality through the floor and says “this is what people can aspire to do: the bare minimum, and it will yield the best life for them. No need to try harder at music! Just replay the same fucking chord progression and you too can make millions.” At least if it’s a shitty AI song, everything is transparently stupid. “Of course it made #1, it was mathematically engineered in a computer to offend no one and make no statements.” Removing the human element at least lays bare the transparent money trap the generated sound is.

            AI-generated works are derivative beyond even the blandest pop. The only human involved is the prompt writer at best; the lyrics, melody, and the recording itself are statistical approximations entirely devoid of human creativity, and that is an utter tragedy.

            But that’s the thing, man, it all boils down to what the prompt writer is doing. If they put in “make new pop song” then yeah, of course it’s going to sound like nothing. If they actually take the time to write out a clearer vision of metaphors and feelings and ideas, perhaps it will come out with something better and unexpected. For example, Shining does some pretty cool saxophone tricks, but I have never heard his voice actually turn into the saxophone and back again the way I heard it in a passage in Pho Que. These kinds of accidents and emergent behaviours can come about because of glitches, or the vision of the person controlling the generation. It can be done by hand in regular music creation, sure, but AI-generated music can do fun, cool things as well that can inspire.

            Again, I don’t want AI music to replace human music. But I think it’s a really simplistic way of looking at things whenever I see “AI bad” all over Lemmy. I mean, I also don’t like seeing the fake bands pop up on YT and Spotify, especially when they pretend they are “real musicians”, and especially when what they are doing has already been done by people they are now overshadowing.

            But that’s not the fucking neural net doing it; it’s some asshole trying to make a quick buck, or someone who had an idea that they got carried away with. We’re back to the central theme: “man is a wolf to man”.

            Anyway, that’s it for now, I have some other ideas I wanted to bring up but couldn’t find a good place to crowbar them in (The evolution of Shining’s sound for example). I hope you read this and thank you if you did. I value your opinion and I’m glad to run into someone else into BTA and heavy music with saxophones.

            I have felt for a long time now that it was a shame that rock ’n’ roll stopped having saxophones in it. Some bands are bringing it back, but some of them use it too smoothly in serene passages, whereas I want the saxophones to screech and make hell noises over heavy guitars.

            • UniversalBasicJustice@lemmy.dbzer0.com · 18 hours ago

              Hey dude, I do appreciate the thorough and thoughtful response. I respect where you’re coming from and largely agree with most of what you’ve said, but I think a key point where we differ lies in our respective interpretations of homo homini lupus.

              I’ll have to simmer on it for a while myself to develop a more nuanced argument, but my first instinct is two-fold: there have always been wolves among us eager to rip our throats out, yet humanity continues to grow and progress despite (and frankly to spite) them. Man may be a wolf to man, but ape together strong, and we are capable of overcoming the wolves in our midst.

              Secondly, I think you’re letting the misanthropy in too deep. I think the injustices you and I and many others are witnessing, both in the music industry and the world at large, are the product of small men wielding outsized control, power and influence over those without. The music industry is a prime example of capital alienating the worker from their labor and that, I think, is the true root of the issues informing your disillusionment. I won’t go too deep down the anti-capitalism rabbit hole (this time) but I will pose to you a question; who benefits more from generative AI, the wolves or the apes?

              As I said, I’ve got more thoughts to distill into coherency but that must needs happen in the light of day, not the grey of dawn.

              That said, I’m absolutely capable of coherency when it comes to discussing heavy avant-garde and especially saxophone. Frankly, you are likely the only person I’ve interacted with who is capable of appreciating this story:

              I emailed Jørgen about…15 years ago at this point? After Ihsahn released After but likely before/around Blackjazz dropping. I was playing tenor sax in high school at the time and wanted to know if he’d transcribed any of his work on After. He replied! And told me “Sorry I made it all up on the spot” which was disappointing but also made me respect him all the more. I was on the barrier for the Nightside Eclipse anniversary tour a few months ago when I recognized the man setting the synth up. You should’ve seen the doubletake he did when I called out his name! Emperor obviously slayed, but that little interaction with Jørgen is my favorite memory of that night. His work has inspired me for a long time and I absolutely agree that saxophone as well as other traditionally orchestral/symphonic instrumentation is capable of fitting into metal and other non-traditional genres. Ne Obliviscaris is another prime example with their violinist.

              The level of deconstruction (shoutout Hevy Devy) required to dismantle a genre and piece it back together in new ways, with new instruments and concepts, and to make it all coherent is human creativity at its finest. You mentioned the sax-to-voice transition in Pho Que as an example of emergent creativity, but I posit (and insist) that those pieces already existed within the realm of humanity’s creativity. Vocoders have been in use in some form or another for a very, very long time (Cynic’s Focus being my personal favorite) and connecting the dots between them and saxophone isn’t much of a leap IMO.

              True musical innovation, true creativity comes from connecting much more disparate dots in new ways and our current socioeconomic systems are more than happy to replace those qualities with generative AI trained on stolen human creativity in order to line their pockets further. You’re absolutely correct that the machines aren’t killing us; their owners, however, are more than happy to build orphan-crushing machines. The wolves are absolutely the problem and they are shoveling slop at us to make us slow, stupid, and fat. Don’t fall for it.

    • Übercomplicated@lemmy.ml · 3 days ago

      That is by no means AI generated, and certainly not by today’s understanding of the term. If I write a score and design an instrument (or sound, etc), that is still a creative process. Brian Eno literally created ambient music with algorithms like that, but it is still his creative work.

      My point is just that computer-generated ≠ AI-generated in general discourse.

      • Skullgrid@lemmy.world · 3 days ago

        What’s the difference? Why isn’t it seen as a collaboration between the person writing the prompt (using a scripting language) and the programmer/designer of the generation software and curator of the dataset?

        • Übercomplicated@lemmy.ml · 3 days ago

          I’m not sure I entirely follow you (I’m only half awake, sorry), but programmed music is only generated by computers insofar as the computer is generating 44100 samples every second based on a set of mathematical rules the composer made. AI music is generated based on huge datasets and probability; the composer has very little to no specific control.

          If I program an instrument/synth in SuperCollider or Pure Data or some hardware synth, and then sample the instrument/synth or create and sequence a melody for it on my MIDI (piano) keyboard or Schism Tracker, etc., I have complete and absolute control over everything, down to the very waveform. In that case I am truly and purely the creator of the piece.
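
          To make the “down to the very waveform” point concrete, here is a minimal Python sketch (a stand-in illustration, not SuperCollider’s or Pure Data’s actual API): every one of the 44100 samples per second comes from a rule the composer wrote.

```python
import math

SAMPLE_RATE = 44100  # samples per second, as in CD-quality audio

def sine_voice(freq, seconds, amp=0.5):
    """Compute every sample of a sine tone explicitly.

    Nothing here is probabilistic: change the formula and the
    waveform changes in exactly the way you specified.
    """
    n = int(SAMPLE_RATE * seconds)
    return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

samples = sine_voice(440.0, 0.01)  # 10 ms of A440
```

          Swap the sine for any other formula (FM, wavetables, filtered noise) and you are still the sole author of every sample, which is the distinction being drawn against prompting.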

          If I type in a prompt, I am just playing a probability lottery. I have done jack shit beyond describing a piece of music.

          I might have misunderstood you though. For now, I’m going to bed. Good night!

          • Skullgrid@lemmy.world · 1 day ago

            First, read this post I already wrote (TL;DR: the software he used was software for generating music, not just for writing out a composition): https://lemmy.world/post/32532002/18110617

            Second, apologies for mixing metaphors and rambling about many similar things at once. The callout about synthesisers is about them reducing band sizes, not about composition.

            If I program a instrument/synth in Supercollidor or Pure Data or some hardware synth, and then sample the instrument/synth or create and sequence a melody for it on my MIDI (piano) keyboard or Schism Tracker, etc., I have complete and absolute control over everything, down to the very waveform. In that case I am truly and purely the creator of the piece.

            I agree with this completely.

            Brian Eno literally created ambient music with algorithms like that, but it is still his creative work.

            If I type in a prompt, I am just playing a probability lottery. I have done jack shit more than describing a piece of music.

            This is the center of the point I’m trying to get at with John Cage and Brian Eno. They made partially completed pieces that were to be re-assembled algorithmically by machines later on, giving up complete authorship to an inhuman entity. Tie that to:

            Why isn’t it seen as a collaboration between the person writing the prompt (using a scripting language) and the programmer/designer of the generation software and curator of the dataset?

            There are very structured, algorithm-based ways of writing music that can be automated in a computer and varied with parameters. Composers/programmers were already doing this in experimental music. They do, and continue to do it, in video games with dynamic soundtracks that react to combat intensity. What’s the difference between these and writing a prompt that says “generate a 3-minute song based off a popular jazz standard chord progression; instead of saxophones use neys, instead of a double bass use a church organ; make the drums sound like they are from the 1900s; use the song structure intro-chorus-interlude-chorus-solo-outro; set the tempo to 150 bpm but make it get faster between structure changes; and use 4/4 time for the intro, swing for the rest”?
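
            That hypothetical prompt maps almost one-to-one onto parameters a conventional generative program could take. A rough Python sketch of the idea (illustrative only, not any real tool; the parameter names are made up for the example):

```python
import random

def generate_song(sections, base_tempo, accel_per_section,
                  bars_per_section, seed=None):
    """Lay out a song as (section, tempo, bar_count) tuples.

    The "prompt" is just a parameter set: structure, a tempo curve,
    and how many bars each section may get. A seeded RNG stands in
    for the controlled randomness ("roll a dice") discussed above.
    """
    rng = random.Random(seed)
    tempo = base_tempo
    plan = []
    for name in sections:
        bars = rng.choice(bars_per_section[name])
        plan.append((name, tempo, bars))
        tempo += accel_per_section  # gets faster between structure changes
    return plan

plan = generate_song(
    sections=["intro", "chorus", "interlude", "chorus", "solo", "outro"],
    base_tempo=150,
    accel_per_section=4,
    bars_per_section={"intro": [4, 8], "chorus": [8], "interlude": [4],
                      "solo": [8, 16], "outro": [4]},
    seed=7,
)
```

            Whether the parameters arrive as a function call, a JMSL score transformation, or an English-language prompt, the human contribution is the same kind of thing: choosing the constraints and curating what comes out.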

            Do I have to go record several pieces of music or have a synthesiser play through variations of the chord progressions, cut them up and roll a dice by hand for it to be my composition?

            The core idea is that record executives have been doing that since pop music existed. No one involved in the process wanted anything other than to make money. It’s the same fucking picture. No one gave a damn before, but now they do. It seems stupid not to have cared back then.