
The attic of Haebom

Pinned
A brief notice for subscribers
We have started running an open chat room called RUMOA; the password is 1024. Skewered Coach, which was previously available only by referral, has now been opened as an open beta. Please take note. Thank you to everyone who subscribes, and please spread the word to the people around you. More than 80% of visitors are non-subscribers ~_~
Haebom
AI’s $600B Question: Balancing Capital and People
The $600B question facing the AI industry: is this really just a matter of investment scale, or is there a bigger picture we are missing? It is time to take a hard look at why the AI market is being called a 'bubble' despite the astronomical sums being poured into it.

The paradox of capital: high entry barriers and an uncertain future

The trap of astronomical costs. Nvidia's latest B100 chip boasts 2.5 times the performance of the previous-generation H100, but costs inevitably rise with it. The total cost of ownership (TCO) of AI infrastructure is roughly twice the hardware purchase price once hidden costs such as energy consumption, cooling systems, building rental, and network infrastructure are included. This cost structure is an almost insurmountable barrier to entry for small businesses and startups, and it risks stifling diversity and innovation in the AI ecosystem.

The imbalance between investment and returns. Expectations for the potential value of AI technology are high, but that value is hard to quantify today. There are success stories like OpenAI's ChatGPT, but the majority of AI projects have yet to prove profitable. This uncertainty pushes investors toward caution, and balancing short-term revenue pressure with the need for long-term technological innovation is a major challenge for companies.

Technological challenges: a poverty of ideas and a shortage of talent

Lack of innovation. One of the biggest problems in the current AI market is the flood of similar ideas. Most companies focus on similar applications built on foundation models, such as chatbots, image generation, and text summarization. This limits the true potential of AI technology and accelerates market saturation.

A serious talent shortage. Compared with the rapid development of AI technology, there is a severe shortage of professionals who can handle it effectively: experts who can optimize and operate high-performance AI chips and design, train, and deploy complex AI models are scarce worldwide. This is not just a quantitative problem but a qualitative one; as the technology evolves, the need for experts who can understand and apply the latest techniques keeps growing.

Limits of the education system. The current education system cannot keep pace with the AI field. University curricula are often disconnected from actual industry requirements, and specialized courses teaching the practical skills needed to operate AI infrastructure are rare. In-company retraining is also insufficient: many companies recognize the importance of AI, but few provide systematic AI training for existing employees.

Solutions: strategies for a sustainable AI ecosystem

1. Establish a long-term investment strategy: when evaluating the ROI of AI projects, companies must look beyond simple financial indicators to factors such as technological innovation, market-share expansion, and customer-experience improvement. Investors should likewise apply more comprehensive criteria when evaluating AI companies.

2. Build public-private cooperation models: governments can reduce the initial investment burden on AI companies through tax benefits, subsidies, and R&D support. At the same time, national AI research centers or data centers should give small and medium-sized businesses and startups access to high-performance AI infrastructure.

3. Innovate education programs: universities and educational institutions must develop curricula that reflect the rapid changes in the AI field, in particular specialized courses focused on technologies needed in real industrial settings, such as AI infrastructure operation, large-scale model training, and AI system optimization.

4. Strengthen reskilling and training inside companies: companies should develop and run their own AI training programs, using methods such as workshops with outside experts, subscriptions to online learning platforms, and opportunities to participate in AI projects.

5. Tap the global talent pool: programs are needed to actively attract excellent AI experts from overseas, which means simplifying visa procedures, offering competitive compensation packages, and improving the research and development environment.

6. Develop AI-based automation tools: to reduce the manpower needed to operate AI infrastructure, we should invest in automation tools built with AI itself, for example tools that automatically monitor and optimize model performance or that automatically detect and correct errors in AI systems.

Closing thoughts

AI's $600B problem is a complex challenge that goes beyond the scale of capital investment; it is about finding a balance among technology, talent, education, and innovation. We are at an important inflection point in this technological revolution. If we overcome the challenge, AI will bring groundbreaking changes to our society and economy, but the process will not be easy. New ideas, proper education, and careful choice and focus will be at the core of it.
Haebom
The alternative social network to Twitter and Facebook that American teenagers are choosing
The social media landscape has been changing rapidly in recent years. As users grow tired of the content-consumption-oriented experience offered by the big incumbent platforms, new forms of social networking are emerging, and platforms that value authentic human relationships and individual taste are drawing attention. A particularly noteworthy service among them is Noplace, which is rising fast amid competition from Wizz, Yubo, purp, LMK, and others. Noplace was among the friend-finding services I covered before; it recently closed an investment round and, after a recent update, ranked first in downloads in the social media category.

Noplace: a space that values individuality and connection

Noplace is a social media platform that gives users a space to freely express their daily lives and interests. It was founded by Tiffany Zhong, was inspired by Myspace and the early days of Facebook, and aims to encourage genuinely social activity between users through personalized profile pages and real-time updates.

Key features

Profile customization: users can express themselves by freely changing the colors of their profile. This goes beyond a simple settings change and serves as an important way for each user to show their personality.

What to share: users can share a variety of things on their profiles, including their relationship status, the music they are listening to, the movie they are watching, and the book they are reading, which allows for deeper communication with friends.

Tags and interests: users can mark their interests and topics with 'stars' tags, which makes it easy to discover new people with common interests.

Top 10 Friends: reminiscent of MySpace's 'Top 8', the 'Top 10 Friends' section lets users strengthen their connection with their closest friends.

Founding background and goals

Tiffany Zhong, the founder of Noplace, felt that the nature of the internet and social media had shifted from social to media. "Everything is too uniform," she said, noting that current social media platforms no longer provide true social networking. Noplace is her attempt to help users have a genuinely social experience again.

User experience

Noplace lets users follow their friends and, at the same time, find others with the same interests. You can share what you are doing on your personal profile and strengthen your connection with the community. The feed is split into a friend feed and a global feed, and all profiles remain public; users under 18 get a more tightly restricted feed for safety.

Technical features and growth potential

Noplace leverages AI to improve the user experience: it uses AI rather than a conventional ranking algorithm for its summary and recommendation features, and the global feed gives users around the world a single stream in which to communicate. Zhong likened it to "putting everyone's brains on paper."

Noplace is a new social media platform that offers a range of features and a genuinely social experience beyond the limits of existing services. This newly emerging platform, led by Generation Z, is raising expectations about how it will change the future of social media, and the social experiences Noplace delivers next will be worth watching.
Haebom
Moonshot AI unveils the LLM serving platform behind Kimi
China's leading AI model developers include Zhipu, Moonshot, MiniMax, and Baichuan. Among them, Moonshot has proven its performance and standing with a corporate valuation exceeding 33 trillion won. Moonshot AI has never disclosed details of its own language model or its parameter count, but it operates a service called Kimi: like ChatGPT, a large-scale service that supports LLM-based chat and search. It is a company I personally use and watch closely, and this time it has published an interesting paper disclosing Mooncake, the platform that serves its LLM workloads. The service actually running on it in production is Kimi, and Moonshot is now moving to sell the platform in a B2B format.

The core of Mooncake is its KVCache-centric distributed architecture. The architecture separates the prefill and decoding stages and implements a distributed cache across CPU, DRAM, and SSD resources. In particular, the prefill node pool is optimized to handle long contexts effectively: nodes can be partitioned for parallel processing, and layer-by-layer prefill keeps VRAM usage to a minimum.

Another strength is Mooncake's KVCache-centric scheduling algorithm, which maximizes cache reuse and optimizes batch sizes to raise the model's FLOP utilization, while balancing cache hit rate against instance load to improve overall system efficiency.

Mooncake's response to overload is also noteworthy. It introduces a prediction-based early rejection policy and minimizes wasted resources through system-level load prediction, which helps keep the system stable even under sharp load spikes.

Experimental results show that Mooncake increases throughput by up to 525% compared with existing methods and handles 75% more requests under real workloads. Its performance has been demonstrated on datasets such as ArXiv Summarization and L-Eval. What is unfortunate is that the comparison was made against LLaMA-2 70B. Why? While wondering about that, it also occurred to me that the paper itself was published rather late; deliberately delaying technology disclosure is common.

Mooncake is a significant technical advance that meaningfully improves the efficiency and performance of LLM serving. It is notable for effectively solving practical problems such as long-context processing and overload handling, and there is room for further development through heterogeneous accelerators or improved KVCache compression.

In short, Mooncake tackles the major problems of LLM serving through an innovative KVCache-centric architecture. The separation of prefill and decoding stages, efficient cache management, and an intelligent overload-response strategy are important advances for the scalability and efficiency of LLM services. The distributed cache built on CPU, DRAM, and SSD resources is fascinating in itself, as is the fact that the service level objectives (SLOs) were met through it.

If you use Kimi, it is genuinely good in practice, especially when looking for articles or information related to China. It feels like a combination of ChatGPT and Perplexity.
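To make the cache-aware scheduling idea concrete, here is a minimal Python sketch of a KVCache-reuse-aware prefill scheduler in that spirit: it hashes the prompt prefix block by block, scores each prefill node by how much of the prompt it can serve from cache versus how loaded it is, and only prefills the uncached remainder. This is a toy illustration of the concept, not Mooncake's actual code; the block size, node fields, and scoring weights are assumptions made here for illustration.

```python
# Toy KVCache-reuse-aware prefill scheduler (illustrative only, not Mooncake's code).
from dataclasses import dataclass, field

BLOCK_SIZE = 16  # tokens per KV-cache block (assumed for this sketch)

@dataclass
class PrefillNode:
    name: str
    load: float = 0.0                                  # pending prefill work, in tokens
    cached_blocks: set = field(default_factory=set)    # hashes of cached prompt-prefix blocks

def prefix_block_hashes(tokens):
    """Hash the prompt prefix block by block, so shared prefixes map to the same cache keys."""
    hashes, prefix = [], []
    for i in range(0, len(tokens) - len(tokens) % BLOCK_SIZE, BLOCK_SIZE):
        prefix.extend(tokens[i:i + BLOCK_SIZE])
        hashes.append(hash(tuple(prefix)))
    return hashes

def pick_prefill_node(nodes, prompt_tokens, reuse_weight=1.0, load_weight=0.05):
    """Pick the node that maximizes KV-cache reuse while balancing load (a toy scoring rule)."""
    blocks = prefix_block_hashes(prompt_tokens)
    best, best_score, best_reused = None, float("-inf"), 0
    for node in nodes:
        reused = 0
        for h in blocks:                 # reuse stops at the first uncached block
            if h in node.cached_blocks:
                reused += 1
            else:
                break
        score = reuse_weight * reused * BLOCK_SIZE - load_weight * node.load
        if score > best_score:
            best, best_score, best_reused = node, score, reused
    # The chosen node only needs to prefill the part of the prompt that is not already cached.
    best.load += len(prompt_tokens) - best_reused * BLOCK_SIZE
    best.cached_blocks.update(blocks)    # newly computed blocks become reusable later
    return best

# Usage: two nodes, the second already holds the prompt's prefix in its cache.
a, b = PrefillNode("prefill-a"), PrefillNode("prefill-b")
prompt = list(range(100))
b.cached_blocks.update(prefix_block_hashes(prompt[:64]))
print(pick_prefill_node([a, b], prompt).name)   # -> prefill-b
```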
Haebom
America's concern: how do we bring electric vehicle prices under control?
The electric vehicle industry is experiencing remarkable growth. In 2023, global electric vehicle sales rose 35% year over year to 14 million units, an astonishing 18% of the entire automobile market. This growth is expected to continue thanks to technological advances and shifting consumer perception.

Innovations in battery technology have dramatically improved driving range and charging speed. The latest EV models offer an average range of more than 300 miles (about 480 km), and with advances in fast charging, models that go from 10% to 80% charge in just 18 minutes have appeared. This goes a long way toward relieving 'range anxiety'.

From an environmental perspective, the benefits are clear. According to the U.S. Environmental Protection Agency (EPA), electric vehicles emit more than 60% less greenhouse gas over their life cycle than internal combustion engine vehicles, equivalent to a reduction of roughly 4.6 tons of carbon dioxide per year.

Charging infrastructure is also expanding rapidly. As of 2023, the United States had roughly 160,000 public chargers, a 40% increase over 2021. That is still not enough, and continued investment from government and the private sector is needed.

Considering the possibility of a change of administration, EV policy could shift if the Trump administration returns to power. The Trump administration tended to support the fossil fuel industry in the past, but given the growth and job-creation effects of the EV industry, an adjustment is more likely than a complete policy reversal.

The upfront cost of EVs is still high but gradually improving. As of 2023, the cheapest EV model in the U.S. cost about $27,000, 15% lower than two years earlier. EVs are also already competitive on total cost of ownership (TCO): over five years of ownership, they cost an average of 28% less than internal combustion engine vehicles once fuel and maintenance are included.

The auto industry is responding quickly as well. Major carmakers have announced plans to convert 40-50% of their entire lineups to electric vehicles by 2030, which suggests more diverse and competitive EV models will reach the market.

In this respect, Korea, Japan, and Taiwan are very well suited to EV adoption: range stress is not as severe as in the United States or China, and the infrastructure is better than expected. The problem is price. With subsidies being cut, few consumers will readily buy an electric car that costs more than 60 million won. That is why low-priced EVs keep coming up in the United States as well: will Rivian, Tesla, and others build low-cost electric cars, or will low-cost EVs produced in China be imported instead? For now, Elon Musk says Tesla has no intention of making a low-priced electric car and seems to be leaning toward a more premium strategy.

If you were to buy an electric car, what would be the right price? And would you be interested if a low-priced EV from a Chinese brand entered Korea?
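As a side note on how a figure like that 28% total-cost-of-ownership gap is typically put together, here is a back-of-the-envelope Python sketch of a five-year TCO comparison. All the purchase, energy, and maintenance numbers in it are hypothetical placeholders made up for illustration, not figures from this article or the EPA; with these placeholders the gap comes out smaller than 28%, and the point is only the structure of the comparison.

```python
# Back-of-the-envelope 5-year TCO comparison for an EV vs. an ICE car.
# All numbers are hypothetical placeholders, not figures from the article.

def five_year_tco(purchase_price, energy_cost_per_mile, maintenance_per_year,
                  miles_per_year=12_000, years=5, incentives=0):
    """Total cost of ownership: purchase minus incentives, plus energy and maintenance."""
    energy = energy_cost_per_mile * miles_per_year * years
    maintenance = maintenance_per_year * years
    return purchase_price - incentives + energy + maintenance

ev = five_year_tco(purchase_price=40_000, energy_cost_per_mile=0.04,
                   maintenance_per_year=500, incentives=7_500)
ice = five_year_tco(purchase_price=32_000, energy_cost_per_mile=0.12,
                    maintenance_per_year=1_200)

print(f"EV 5-year TCO : ${ev:,.0f}")
print(f"ICE 5-year TCO: ${ice:,.0f}")
print(f"EV is {100 * (ice - ev) / ice:.0f}% cheaper over five years" if ev < ice
      else f"ICE is {100 * (ev - ice) / ev:.0f}% cheaper over five years")
```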
Haebom
Japan's new challenge: a plan to put freight-only roads underground
Over the past 30 years, the spread of online shopping has doubled the volume of small-package shipments, yet by 2030 around 30% of parcels are expected to go undelivered because of labor shortages. This has created an urgent need to automate logistics systems. Accordingly, an expert panel at Japan's Ministry of Land, Infrastructure, Transport and Tourism has proposed an automated logistics system connecting Tokyo and Osaka. The system would move goods through highway median strips or underground tunnels and is targeted for completion by 2034.

The biggest challenge is cost. According to a survey of construction companies, building an underground tunnel would cost between 7 and 80 billion yen per 10 kilometers, and a Tokyo-Osaka link could cost up to 3.7 trillion yen. That is far higher than the 25.4 billion yen per 10 kilometers planned for the ground-level logistics link in 2000. "This project will not only solve the logistics crisis, but also help reduce greenhouse gas emissions. We want to quickly advance discussions on this issue," said Tetsuo Saito, Minister of Land, Infrastructure, Transport and Tourism.

Honestly, is it just me, or does this bring to mind Tokyo-3 from Evangelion more than anything about the environment? In Japan, as in the United States, projects are actually under way to move existing elevated highways underground. In Korea, a recent example of major road undergrounding is the Dongtan section of the Gyeongbu Expressway: on March 28, 2024, the 4.7 km section between the Dongtan Junction and the Giheung-Dongtan Interchange was straightened, with 1.2 km of it moved underground. It was Korea's first urban expressway undergrounding project and took seven years of construction to complete.

What I find impressive is the way the plan decides what the road is for, rather than simply putting a road underground to free up space or protect the environment. The idea of building a dedicated freight passage instead of a road for cars feels almost like something out of a cartoon. Personally, it reminds me of the proposed undersea tunnel between Japan and Korea, and the Asian Highway that runs all the way to Türkiye. Does anyone else remember those? lol
Haebom
The first official brand video made with OpenAI's video generation model Sora has been released.
Toys R Us is a global toy specialty retailer, brought to Korea by Lotte and located in Lotte department stores and malls. This brand film was created in collaboration with Native Foreign using OpenAI's Sora, and it depicts a young Charles Lazarus, the founder of Toys R Us, in the early 1930s, watching and dreaming alongside Geoffrey the Giraffe, the brand's iconic and beloved mascot. Sora is a service that generates videos up to one minute long, with realistic scenes and multiple characters, from text instructions, and it is currently accessible only through partners. I'm very impressed that this is the first "official" brand film produced with Sora.
Haebom
Is language a tool for communication or the basis of thought?
After writing the last article, I talked with friends about what language is and what thought is, and then I came across an interesting article. For a long time, philosophers have pondered the purpose of language. Plato emphasized its importance, calling thinking "an internal conversation between the soul and itself." Modern scholars such as Chomsky have also argued that language is essential for thinking and reasoning.

Recent results from neuroscience, however, contradict this. A representative example is Dr. Evelina Fedorenko, a cognitive neuroscientist at MIT. Scanning people's brains with fMRI, she discovered something interesting: the brain regions that process language are not activated during thinking or problem solving. "We couldn't find any evidence that language was necessary for thinking," says Dr. Fedorenko. Studies of brain-injured patients support this as well: even aphasic patients who lost their language skills had no difficulty solving math problems or playing chess.

These studies show that language is not essential for thinking. So what is the main purpose of language? Dr. Fedorenko firmly says 'communication'. Research backs this up: words that are used more frequently tend to be shorter, and grammars tend to keep words that are related in meaning close together, as if language had evolved to optimize information transfer.

Now, shall we move on to LLMs, which have become such big players in the artificial intelligence world? Thanks to training on vast amounts of text, LLMs speak language as fluently as humans, as if they perfectly imitated the human 'language network'. Yet their reasoning and problem-solving skills fall short. Given Dr. Fedorenko's view that language and thinking are separate, this may be natural: LLMs have mastered the tool of language but have not developed the ability to think with that tool.

Of course, this research does not diminish the value of language. As Professor Guy Dove of the University of Louisville says, "Language can be a tool to improve thinking." When we think about democracy, we often think in terms of conversations about democracy, right?

Language network: specific brain regions were activated when participants performed language tasks, and this area stayed in the same location over time.

Thinking network: different brain regions were activated when the same participants solved puzzles or performed other thinking tasks, while the language network remained quiet.

People with aphasia: people whose language networks were damaged by brain injury could still perform tasks such as arithmetic or playing chess, further evidence that language is not essential for thinking.

The important point is that language is not the only, or an essential, tool for thinking. We can think, and always have thought, in more fundamental ways; this seems self-evident when you watch babies struggle to solve problems and understand the world before they learn language. Watching LLMs makes me think again about which abilities are uniquely human: the power not merely to exchange information, but to combine it creatively and create new knowledge. For humans and artificial intelligence to grow together, won't we need both wings, 'language' and 'thinking'?
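The claim that more frequently used words tend to be shorter (Zipf's law of abbreviation) is easy to check for yourself. Below is a small, self-contained Python sketch of that check; the sample sentences are made up purely for illustration, so for a meaningful result you would substitute a large real corpus.

```python
# A toy check of the "more frequent words are shorter" claim (Zipf's law of
# abbreviation). The sample text is made up; swap in any large plain-text corpus.

import re
from collections import Counter

sample_text = """
the cat sat on the mat and the dog sat near the door
language is a tool that the brain uses to communicate with other people
we think about the world and we talk about the world with the same short words
"""

words = re.findall(r"[a-z]+", sample_text.lower())
freq = Counter(words)

def ranks(values):
    """Assign each value its rank in ascending order (ties are not averaged; rough but fine here)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

unique = list(freq)
freq_ranks = ranks([freq[w] for w in unique])
len_ranks = ranks([len(w) for w in unique])

# Pearson correlation of the two rank lists (a rough Spearman correlation).
n = len(unique)
mean_f = sum(freq_ranks) / n
mean_l = sum(len_ranks) / n
cov = sum((f - mean_f) * (l - mean_l) for f, l in zip(freq_ranks, len_ranks))
ss_f = sum((f - mean_f) ** 2 for f in freq_ranks) ** 0.5
ss_l = sum((l - mean_l) ** 2 for l in len_ranks) ** 0.5
correlation = cov / (ss_f * ss_l)

# A negative correlation means higher-frequency words tend to be shorter.
print(f"{n} distinct words, frequency-length rank correlation = {correlation:.2f}")
```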
Haebom
Good land for building a data center
With the recent explosive development of artificial intelligence (AI) technology, the global data center market is growing rapidly. But what about Korea? Unfortunately, we are falling behind in this important trend. Let's look at why this is happening and what the problems are; personally, I think this issue matters a great deal.

Did you know that global big tech companies such as Google, AWS, Microsoft, and Apple are racing to build large-scale data centers in Asia? Malaysia and Taiwan in particular stand out in this competition. Apple announced that it would build a data center worth 4.3 trillion won in northern Taiwan, Google has already invested 1.7 trillion won, and AWS is reportedly investing billions of dollars over the next 15 years. So why did these companies choose Taiwan rather than Korea? The reason is simple: Taiwan is responsible for a significant share of the world's semiconductor production and occupies a key position in the global semiconductor supply chain, and the Taiwanese government is courting these companies with active policies such as the 'Asia Silicon Valley Development Plan'.

So why is Korea falling behind? The problems are multiple.

First, power supply. Korea's power infrastructure is said to be well developed, but in practice it is insufficient for the rapidly growing power demand from data centers. Data center power consumption is expected to more than double by 2026, yet current supply capacity has its limits, and in metropolitan areas such as Seoul and Gyeonggi-do electricity demand is already saturated, making it difficult to add large power-hungry facilities.

Second, land. Korea's land area is small and its population density is high, so it is not easy to find sites for large-scale data centers, and with everything concentrated in the capital region it has become even harder to build one in an appropriate location.

Third, the regulatory environment. Korea's data management and personal information protection rules are stricter than those of other countries, and this holds global companies back.

Lastly, the biggest problem is the network usage fee system. Network usage fees in Korea are absurdly high compared with other countries, as Twitch made clear when it revealed that its network fees in Korea were ten times higher than elsewhere; Twitch eventually withdrew from the Korean market because of this.

If this situation continues, Korea can only keep falling behind in the competition for the core infrastructure of the AI era. So what should be done? First, the government must urgently pursue power infrastructure expansion and regional distribution policies. It must also establish active support policies to attract data centers and bring the regulatory environment up to global standards. In particular, the network usage fee issue must be resolved immediately: we need a reasonable fee system that meets international norms and a balanced, cooperative relationship between global companies and domestic telecom operators. Data centers in the AI era are not just corporate assets; they are infrastructure directly tied to national competitiveness.

For Korea to become an AI powerhouse, government, industry, and academia must join forces to develop and execute a comprehensive strategy. Otherwise we will be relegated to the periphery of the AI era. If labor costs were the explanation, there would be nothing to say about data centers going up in Taiwan and Japan; if land prices or infrastructure were the explanation, there would be nothing to say about them going up in countries such as Malaysia and Thailand, whose infrastructure is weaker than Korea's. And if geopolitics were the problem, they are nonetheless being built in Taiwan, which faces a direct threat from China, and in Malaysia and Vietnam, which are on friendly terms with China and Russia.

But wait a minute: is attracting a data center always a good thing? Singapore's case gives me doubts. Although Singapore has over 70 data centers, it has been restricting new data center permits. Why? Data centers consume enormous amounts of electricity and water; reporter Lee Bong-ryeol's feature shows that in Singapore data centers account for 7% of total electricity consumption. And although data centers occupy a lot of land, they do not need many people to operate, so the employment effect is small. Because of these problems, Ireland, the U.S. state of Virginia, and the European Union are also tightening data center regulations, and Korea Electric Power Corporation is blocking new data centers in the capital region starting this year.

What choice should we make? Rather than courting data centers unconditionally, a cautious approach seems necessary. The government should step in, set data center regulations, and distinguish what is essential for domestic information security from what is not. It should also ensure that the energy used to run data centers comes from renewables rather than fossil fuels.

Personally, I still think Korea needs to attract data centers. Even if we do not attract foreign ones, domestic companies or the state should at least build to global standards so that we can gain something from this competition. There are environmental and institutional reasons to hesitate, but it is worth thinking through. (Of course, Amazon, Google, and Meta do operate regions in Korea, but they are very small in scale and most of them exist mainly because of personal information protection laws and similar requirements.)
Haebom
The era of drones is fast approaching due to war and AI.
Recently, Elon Musk posted a tweet saying, "The war of the future will be a drone war." The story behind it is more interesting than you might think. The original article reported that drone technology is developing rapidly because of the war in Ukraine, and the most striking part is that a $400 drone can destroy a tank worth hundreds of millions of dollars. Commonly called suicide drones, they literally crash into their targets and self-destruct. This example from the Ukraine war illustrates how quickly drone technology is advancing and how it could change the face of warfare. The fact that low-cost drones can neutralize cutting-edge weapons systems has given military strategists a new set of concerns.

But the power of drones does not lie simply in cost-effectiveness. As artificial intelligence is applied to drones, their potential grows further: AI-equipped drones will be able to analyze battlefield situations on their own, identify targets, and determine optimal attack routes, meaning complex missions can be carried out without a human pilot.

These developments in drone and AI technology will affect society well beyond the military. Drones are already used in fields such as agriculture, logistics, and entertainment, and as AI advances, the scope of their use will keep expanding. At the same time, concerns are growing about side effects such as privacy infringement, security threats, and job losses. Just as the computers and the internet we use today were developed rapidly and spread to the public because of war, drones will be part of our everyday lives before we know it. In Korea too there is an ongoing push to use urban air mobility (UAM) as a means of transportation.

In the end, it all comes down to regulation. Even now, drones flown in no-fly zones are frequently reported, and as drones become easier to get hold of, the tug-of-war between regulation and adoption looks unavoidable. I would also like to discuss why technological progress happens so quickly in wartime, but that is a topic for another article.

If you follow the link above, you can see a bomb strapped to a genuinely low-cost drone destroying an opposing trench or tank. It makes me think that future wars may become battles between drones, without casualties. It also reminds me of the B1 battle droids from the Star Wars films I watched as a kid (not drones, strictly speaking, but robots that fight the war in place of people).
Haebom