I was shocked as I went through the source and struggled to find any modules written in C. Craziness.
https://odysee.com/ – this one is also worth checking out, Louis Rossmann even posts there.
Holy shit, this is incredible. I’ve wanted a way to permanently hide Shorts forever, thanks for sharing. Also it’s actually recommended by Mozilla, which means it undergoes active security reviews. Impressive.
He should have installed neovim with LSPs for Python/Rust/etc for intellisense and linting to really get her all hot and bothered.
Are you sure? Carjackings are up 37% in Chicago this year already. I would consider armed robbery and armed carjacking to be violent, and those are up; it’s homicides that are down, it seems.
Some people living in poverty aren’t being heard, while more poverty-stricken people are being moved into their neighborhoods. These issues impact literally everyone at every demographic level, so it’s easier to understand why minorities are jumping ship.
They could be more like AMD in that regard, to answer your question:
- **Direct contributions to the Linux kernel:** AMD contributes directly to the Linux kernel, providing open-source drivers like amdgpu, which supports a wide range of AMD graphics cards.
- **Mesa 3D Graphics Library:** AMD supports the Mesa project, which implements open-source graphics drivers, including those for AMD GPUs, improving performance and compatibility with the OpenGL and Vulkan APIs.
- **AMDVLK and RADV Vulkan drivers:** AMD has released AMDVLK, its official open-source Vulkan driver. Alongside it there’s RADV, an independently developed, Mesa-based Vulkan driver for AMD GPUs.
- **Firmware:** AMD distributes redistributable firmware blobs for its GPUs through the linux-firmware project, enabling out-of-the-box support in the Linux kernel.
- **ROCm (Radeon Open Compute):** An open-source platform providing GPU support for compute-oriented tasks, including machine learning and high-performance computing, on AMD GPUs.
- **AMDGPU-PRO driver:** While primarily proprietary, AMDGPU-PRO sits on top of the open-source amdgpu kernel driver, offering compatibility and performance for professional and gaming use.
- **X.Org driver (xf86-video-amdgpu):** An open-source X.Org driver for AMD graphics cards, providing 2D graphics, video acceleration, and display features.
- **GPUOpen:** A collection of tools, libraries, and SDKs, many of them open source, that help game developers and other professionals optimize the performance of AMD GPUs in various applications.
It’s crazy how true this is, yet you get downvoted for recognizing failing policies and actions. Are people really this tribal?
How does one reconcile the fact that Black and Hispanic voters are dropping away from the Democratic party? Is it possibly because of failed policies? Is it possible Trump is gaining voters because he represents something people resonate with, versus the current status quo: measles outbreaks, welfare states, economic failures (inflation, where everyone in the US is losing except the top 1%)? The list goes on, but the idea is still the same: an old, failing man in office who needs to be removed.
I think it comes down to the tens of millions of dollars that the reddit executives sold out for. It’s easy not to care when someone is throwing $100 million at you. Also: fuck spez.
There’s probably even a ‘sentiment’ tracking system to automatically remove negative comments at this point.
Am I the only one in this thread who uses VSCode + GDB together? The inspection panes and ability to breakpoint and hover over variables to drill down in them is just great, seems like everyone should set up their own c_cpp_properties.json && tasks.json files and give it a try.
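If anyone wants a starting point, here’s a minimal sketch of that setup (the compiler, flags, and file names are placeholders for whatever your project uses). One note: the GDB launch configuration itself lives in launch.json, which runs the build task from tasks.json, while c_cpp_properties.json handles IntelliSense on top.

```json
// .vscode/tasks.json — build a debuggable binary (placeholder gcc setup)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build-debug",
      "type": "shell",
      "command": "gcc",
      "args": ["-g", "-O0", "main.c", "-o", "main"],
      "group": { "kind": "build", "isDefault": true }
    }
  ]
}
```

```json
// .vscode/launch.json — wires GDB into the editor's breakpoints and hovers
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug main",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/main",
      "cwd": "${workspaceFolder}",
      "MIMode": "gdb",
      "preLaunchTask": "build-debug"
    }
  ]
}
```

With those two in place, F5 builds and drops you straight into the breakpoint/hover workflow described above.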
I’m betting the truth is somewhere in between. Models are only as good as their training data, so if over time they prune out the bad key/value pairs to increase overall quality and accuracy, it should vastly improve every model in theory. But the sheer size of the datasets they’re using now is 1 trillion+ tokens for the larger models. Microsoft (ugh, I know) is experimenting with the Phi-2 model, which uses significantly less training data but focuses primarily on the quality of the dataset itself, letting a 2.7B-parameter model compete with 7B-parameter models.
https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/
> In complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.
This is likely where these models are heading: pruning out superfluous and outright incorrect training data.
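As a rough illustration of what that pruning could look like mechanically, here’s a minimal Python sketch; the dedup-plus-length heuristics are invented for the example and are not Microsoft’s actual Phi-2 curation pipeline:

```python
# Invented illustration of dataset pruning: exact-dedup plus a crude
# length filter. Not Microsoft's actual Phi-2 curation pipeline.
import hashlib

def curate(records: list[str], min_len: int = 40) -> list[str]:
    seen: set[str] = set()
    kept: list[str] = []
    for text in records:
        digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate of a record we already saw
        seen.add(digest)
        if len(text.strip()) < min_len:
            continue  # too short to carry any training signal
        kept.append(text)
    return kept

corpus = [
    "def add(a, b): return a + b  # a well-formed training sample",
    "asdf",  # junk entry, dropped by the length filter
    "def add(a, b): return a + b  # a well-formed training sample",  # dup
]
print(curate(corpus))  # only the first sample survives
```

Real pipelines layer on fuzzy dedup, toxicity and quality classifiers, and so on, but the shape of the job is the same: fewer, better tokens.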
Doesn’t that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will come out some day as the sum of human knowledge without all the guard rails. There are some good ones, like Mistral 7B (and the uncensored Dolphin-Mistral in particular). But I hope Mistral and the other AI developers keep maintaining lines of uncensored, unbiased models as these technologies grow even further.
I’ve been doing this for over a year now, having started with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky; even GPT-4 has evolved many times over without people really knowing what’s happening behind the scenes.) The problem that remains is the “context window”: Claude.ai is >100k tokens now, I think, but the context still limits how much code an entire ‘session’ can produce within that window. I’m still trying to push every model to its limits, but another big problem in the industry right now is measuring effectiveness, via “perplexity,” at a given context length.
https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small
This plot shows that as the window grows (in direct proportion to the number of tokens of code you insert, plus every token the model generates alongside them), everything the model produces becomes less accurate and more perplexing overall.
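For anyone who wants to reproduce that kind of measurement, here’s a minimal sketch using Hugging Face transformers; the model name, input file, and window sizes are placeholders, not the setup behind the plot above:

```python
# Hypothetical sketch of the perplexity-vs-context measurement described
# above; "gpt2" and the window sizes are placeholders, not the plot's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = open("some_long_source_file.py").read()  # placeholder long input
ids = tokenizer(text, return_tensors="pt").input_ids[0]

for length in (256, 512, 1024):  # progressively larger context windows
    window = ids[:length].unsqueeze(0)
    with torch.no_grad():
        # passing labels makes the model return mean cross-entropy loss
        loss = model(window, labels=window).loss
    # perplexity = exp(mean cross-entropy); higher = more "perplexed"
    print(f"{length:>5} tokens -> perplexity {torch.exp(loss).item():.2f}")
```

If perplexity climbs as the window grows, you’re seeing exactly the degradation the plot describes.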
But you’re right overall: these things will continue to improve, but you still need an engineer to actually make the code function in a particular environment. I just don’t get the feeling we’ll see that within the next few years; if it does happen, though, every IT worker on earth becomes effectively useless, along with every desk job known to man, since an LLM would be able to reason about how to automate any task in any language at that point.
Why would that ever even happen? What incentive does a business have to stifle its own profit margins?
You just described all of my use cases. I need to get more comfortable with Copilot- and Codeium-style services again; I enjoyed them to some extent 6 months ago. Unfortunately my current employer has to be federally compliant with government security protocols, and I’m not allowed to ship any code in or out of some dev machines. Because of that, I still run LLMs on another machine, acting, like you mentioned, as sort of my Stack Overflow replacement. I can describe anything or ask anything I want and immediately get extremely specific custom code examples.
I really need to get codeium or copilot working again just to see if anything has changed in the models (I’m sure they have.)
I use AI to write code for work every day. Many different models and services, including https://ollama.ai on my own hardware. It’s useful when a developer can take the code and refactor it to fit into large codebases (after fixing its inevitable broken code here and there), but it is by no means anywhere close to successfully writing code all on its own. Eventually, maybe, but nowhere near anytime soon.
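For reference, querying a local Ollama server from a script is a single HTTP call. A hedged sketch, assuming the default localhost:11434 endpoint and that the (placeholder) codellama model has already been pulled:

```python
# Hedged sketch: one-shot completion from a local Ollama server
# (https://ollama.ai). The model name is a placeholder; assumes the
# default endpoint and that you've already run `ollama pull codellama`.
import json
import urllib.request

payload = {
    "model": "codellama",
    "prompt": "Write a Python function that retries an HTTP GET three times.",
    "stream": False,  # one JSON object back instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Wrap that in whatever editor or shell tooling you like and you have the self-hosted Stack Overflow replacement mentioned upthread.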
Sweet, do you have any links on how to set that up? My next goal is to set up my own lemmy.<mydomain> instance so I can pull various things into my own aggregation. Last time I tried, I hit errors after the Rust compilation steps; need to try it again.