#mongodb logs for Saturday the 26th of April, 2014

[00:27:07] <dgray> Hello! I'm working on implementing bulk writes (2.6) in a driver. I'm running into a problem where I'm sending the db command "update" with the field "updates" as an array of objects, and getting back "wrong type for 'updates' field contents, expected object, found 4", and I am very confused
[00:28:30] <dgray> specifically, in my test case, I'm sending in 2 update queries in an array
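
The "found 4" in the error refers to BSON type 4, which is an array: each element of the updates array must itself be a document, not an array, so the driver is probably serializing each update as an array. For reference, a minimal sketch of the 2.6 update write command as it looks from the mongo shell (collection name and filters are placeholders):

    db.runCommand({
        update: "things",                      // target collection (placeholder name)
        updates: [                             // each element is a document, not an array
            { q: { _id: 1 }, u: { $set: { n: 10 } }, multi: false, upsert: false },
            { q: { _id: 2 }, u: { $set: { n: 20 } } }
        ],
        ordered: true
    })
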
[05:11:58] <chovy> how do i tell what version of mongodb i have on my mac?
[05:13:51] <Mark_> # mongo --version
[05:13:52] <Mark_> MongoDB shell version: 2.4.9
[05:13:57] <chovy> k
[05:14:02] <chovy> thanks
[05:14:11] <chovy> i didn't know if the shell was the same version as the db
[05:15:05] <Mark_> they arent guaranteed to be or anything
[05:15:10] <Mark_> but chances are you installed them from the same place
[05:15:19] <Mark_> from the same packaging system, where they would likely be the same version etc
[05:15:33] <Mark_> i.e. in deb/ubuntu youd have mongodb and mongodb-clients packages
[05:15:48] <Mark_> or mysql-server and mysql-client, same thing
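
For reference, the shell and server versions can be checked separately; a small sketch (the shell helpers assume you are connected to the mongod in question):

    $ mongo --version      # version of the shell binary
    $ mongod --version     # version of the server binary installed locally
    > db.version()         # version of the mongod the shell is connected to
    > version()            # version of the shell itself
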
[05:51:53] <yvemath> How can i make use of redis and mongodb both, as in I want to have redis running forever, and mongodb to update its documents every 1 or 2 hours from whatever is in redis memory? Is it feasible to implement?
[05:59:52] <Cache_Money> I'm creating a Mongoose schema for a Game object but can't quite figure out the stats object.. https://gist.github.com/anonymous/11312806
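
The gist isn't reproduced here, so the following is only a hedged sketch of one way to model a Game with per-sport stats in Mongoose, using a Mixed subdocument since the stat fields differ by sport (all field names are assumptions):

    var mongoose = require('mongoose');

    var gameSchema = new mongoose.Schema({
        sport:    { type: String, required: true },
        playedAt: Date,
        // Stats differ per sport, so leave this path schema-less:
        stats:    { type: mongoose.Schema.Types.Mixed, default: {} }
    });

    var Game = mongoose.model('Game', gameSchema);

    // Note: Mongoose cannot detect in-place changes to a Mixed path, so mark
    // it modified before saving, e.g.:
    //   game.stats.points = 21; game.markModified('stats'); game.save(callback);
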
[11:21:03] <arussel> I'm reading mongod documentation, it says: If your mongod instance has journaling enabled, then you can use any kind of file system or volume/block level snapshot tool to create backups.
[11:22:29] <arussel> Does that mean that with journaling, using AWS snapshot is good enough ?
[11:24:44] <thearchitectnlog> r the current user (Apple), as it has no running NotificationCenter instance.
[13:36:34] <k_sze> If I want to reclaim space of a replicaSet member using db.repairDatabase(), do I have to take it out of the replica set temporarily first?
[13:52:17] <k_sze> And can *all* databases be repaired using repairDatabase? I mean, for instance, the `local` and `admin` databases.
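
No one answered in the log; one commonly used approach (hedged, and worth checking against the docs for your version) is to take the member out of rotation by restarting it as a standalone mongod before repairing, or simply to resync it, since repairDatabase works one database at a time:

    // After restarting the member as a standalone mongod (e.g. on another port):
    use someDb              // placeholder database name; repeat per database
    db.repairDatabase()
    // Alternative way to reclaim space: wipe the member's dbpath and let it
    // perform an initial sync from the rest of the replica set.
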
[14:04:37] <Aswebbb> Does anyone use encryption for the data ?
[16:10:45] <airportyh> hello all. I am using full text search on mongo version 2.4.9. I've found that if you don't supply a limit, it still returns at most 100 documents. And it takes a long time to return them.
[16:11:13] <airportyh> Question 1: is there a way to remove the 100 limit?
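
On 2.4, text search goes through the text command, which caps results at 100 unless a limit is passed explicitly; a minimal sketch (collection name and search term are placeholders):

    db.articles.runCommand("text", { search: "mongodb", limit: 500 })
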
[20:23:40] <Cache_Money> I need some guidance in structuring my Mongo DB. I'm building an app that tracks stats for sports games at an individual player level. I want an API endpoint such as '/sports/[:id]/games' so that I can get all of the basketball games. Would the collections simply be titled 'basketball_games', 'baseball_games', etc.? Or would I have just one collection, 'games', where each document has a sport field with a corresponding sport_id?
[20:24:13] <Cache_Money> game documents will be structured differently depending on the sport..
[20:25:14] <Mark_> im not sure that mongo is the best tool for the job
[20:25:25] <Mark_> if you want to be able to store game -> players and players -> games
[20:25:36] <Mark_> and look them up in all sorts of associative ways
[20:26:40] <Cache_Money> Mark_: I basically have 3 main models: Players, Sports, and Games.
[20:27:09] <Cache_Money> A player will play 1 or more sports, and a have many games associated with each sport
[20:29:08] <Cache_Money> Mark_: mongo might not be the best fit but I'm even having trouble determining how to do this in a RDB -- I can't have a game table because stats for a basketball game are quite different than a golf game..
[20:29:19] <Mark_> yea
[20:29:43] <kali> Cache_Money: it always boils down to what queries your application will do the most, and how to get them to be fast
[20:29:52] <Mark_> well mongo is great for free structure, but it might not be the best tool if you want to be able to pull up like, all the players in a golf game
[20:31:09] <Cache_Money> Mark_: true. I probably won't be doing that much. I'm assuming the most common would be finding a Player, then determining which Sports they play, then returning games based on which sport they want to look at
[20:31:54] <Mark_> then player -> sports -> games is probably sane
[20:32:18] <Cache_Money> Mark_: what do you mean by that?
[20:32:23] <Mark_> a document structure
[20:33:23] <Cache_Money> So, you're saying I create 1 document (Players), which will have an array of player objects. Inside each will be an array of Sport objects. Inside each Sport will be an array of Game objects?
[20:33:24] <Mark_> players { player stats, sports { sport1: { games: { 1..n: { game_stats } }, sport2: { games: { 1..n: { game_stats } } }
[20:33:25] <Mark_> etc
[20:33:57] <Mark_> well youd call them subdocuments in mongo-speak
[20:34:02] <kali> that's a lot of duplication
[20:34:19] <Mark_> well, not if the game data was player specific
[20:34:28] <Mark_> i.e. number of assists, free throws, 3 pointers, just for that player
[20:34:36] <Mark_> im assuming general gamestats are stored elsewhere
[20:34:41] <kali> mmm true
[20:34:42] <Cache_Money> that makes sense. (I'm new to mongo). I was trying to create a Players doc, a Sports doc, and a Games doc
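
A cleaned-up version of the subdocument layout Mark_ sketched above: one document per player, with that player's game stats nested under each sport (all field names and values are illustrative):

    {
        _id: ObjectId("..."),
        name: "Jane Doe",
        sports: [
            {
                sport: "basketball",
                games: [
                    { gameId: 101, stats: { points: 21, assists: 7, rebounds: 4 } },
                    { gameId: 102, stats: { points: 14, assists: 3, rebounds: 6 } }
                ]
            },
            {
                sport: "golf",
                games: [
                    { gameId: 205, stats: { strokes: 74, birdies: 3 } }
                ]
            }
        ]
    }
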
[20:34:54] <kali> Mark_: you're from the states ?
[20:34:57] <Mark_> yea
[20:35:07] <kali> :)
[20:35:11] <Mark_> phoenix
[20:35:36] <kali> your US sports have a lot of individual metrics...
[20:35:57] <Cache_Money> kali: where are you from?
[20:37:01] <kali> france. in soccer, rugby, handball,... there is much less work done on players' individual statistics
[20:37:31] <kali> hence my guess about stats being global team or game stats :)
[20:37:38] <Mark_> if i was doing it in an rdbms id probably have a table for each game, then a table of stats for each game id/player id, then a table of players baseline information
[20:37:45] <Mark_> and a lot of foreign keys
[20:39:53] <Mark_> i do poker stat tracking for one app
[20:40:03] <Mark_> i have PokerGames, PokerPlayers, and PokerActions in an rdbms
[20:40:32] <Cache_Money> Mark_: So, are you saying I create a Players table, a Games table, and a Stats table?
[20:40:43] <Mark_> well, probably more, since there are lots of different games
[20:40:46] <Mark_> with different fields
[20:40:55] <Mark_> now, mongo would be perfect for just general game stats of course
[20:41:05] <Mark_> its just when you start getting into player specifics theres more of a relational component
[20:41:09] <Cache_Money> instead of Stats table, I'd create a BasketballStats table, SoccerStats table, etc.?
[20:41:10] <Mark_> of course theres nothing stopping you from using both
[20:41:17] <Mark_> mongo and say.. postgres
[20:41:30] <Mark_> mongos loose document structure would be great for just simple game summaries
[20:42:47] <Mark_> very fast and easy
[20:43:11] <Mark_> then maybe you do a more relational system for players and their individual game stats and what game it was a part of (perhaps even referencing a mongo document id)
[20:43:53] <hit> hi, can someone help me with the aggregation framework??
[20:43:57] <Cache_Money> Mark_: that's kind of what I was thinking..
[20:44:53] <hit> I'm trying to do a $group operation on a big database but some of the resulting documents exceed the 16MB limit
[20:44:56] <hit> how to solve it?
[20:47:23] <kali> hit: are you using 2.6 or 2.4 ?
[20:48:08] <hit> 2.6
[20:48:34] <kali> ok. can you show the pipeline and the error ?
[20:49:10] <kali> (on a paste service, of course)
[20:52:00] <hit> kali, here it is: http://pastebin.com/L5tb4V7b
[21:00:09] <kali> hit: mmmmm.
[21:01:21] <kali> hit: well, i think you just can't fit all the $v from at least one of the $k
[21:02:08] <kali> hit: the result you're trying to build is too big for a mongo document. the problem goes deeper than the aggregation framework
[21:03:44] <hit> kali: ok, I was afraid of that. I'll try to remove that $k and other common ones and try again
[21:03:49] <hit> kali: thank you
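
The pastebin isn't reproduced here, so this is only a hedged illustration of kali's point: each output document of $group must stay under 16MB, so the usual workarounds are to accumulate less data per group or to split the group key so no single $k has to hold every $v (the extra "day" key field is an assumption):

    db.coll.aggregate([
        { $group: {
            _id: { k: "$k", day: "$day" },     // finer key => smaller output documents
            vs:  { $addToSet: "$v" }           // $addToSet also drops duplicate values
        } },
        { $out: "grouped" }                    // 2.6+ only
    ])
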
[22:44:53] <thearchitectnlog> hey, I'm getting this error when I $group into $out: "errmsg" : "exception: Unrecognized pipeline stage name: '$out'",
[22:44:53] <thearchitectnlog> "code" : 16436,
[22:44:53] <thearchitectnlog> "ok" : 0
[22:49:51] <thearchitectnlog> plz help
[22:56:46] <thearchitectnlog> anyone here ?
[22:58:46] <hit_> thearchitectnlog: I guess you need the latest version 2.6 to use $out
[22:59:26] <cheeser> you do
[22:59:40] <cheeser> $out was added for 2.6
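
For reference, a minimal $out pipeline as supported from 2.6 onward; it must be the last stage, and the collection and field names here are only placeholders:

    db.orders.aggregate([
        { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
        { $out: "order_totals" }       // writes the results to a new collection
    ])
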
[23:03:42] <Auckla> Good day gentlemen.
[23:03:54] <Auckla> I have a giant flat table, 120GB in size.
[23:04:28] <Auckla> Typically, I would break it up into a bunch of 6GB files or so, load them into a flat table on mysql, and run queries for the specific groups of data I want, creating much smaller, more manageable tables.
[23:04:43] <Auckla> I want to enter this scary vastly different world of NOSQL.
[23:04:50] <Auckla> And I am learning I need to convert data into json files.
[23:05:12] <Auckla> How does one convert large pipe-delimited txt files into json files?
[23:09:53] <thearchitectnlog> also, does the out option in map-reduce need 2.6?
[23:11:00] <cheeser> thearchitectnlog: no. you don't.
[23:11:37] <cheeser> Auckla: you're likely going to want to change your data model. maybe you won't, but keep that in mind.
[23:12:03] <cheeser> but in any case, mongoimport supports that kind of import, iirc
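
mongoimport handles JSON, CSV and TSV but not arbitrary delimiters, so one hedged approach for a pipe-delimited dump (assuming the values contain neither tabs nor pipes, and that the first line holds the column names) is to translate it to TSV first:

    tr '|' '\t' < bigtable.txt > bigtable.tsv
    mongoimport --db mydb --collection bigtable --type tsv --headerline --file bigtable.tsv
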
[23:17:26] <thearchitectnlog> final question how do i create a super admin
[23:17:29] <thearchitectnlog> on the db
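
No one answered in the log; on 2.6 the usual approach is a user with the root role on the admin database (on 2.4 the closest equivalent is db.addUser with the *AnyDatabase roles). The credentials below are placeholders:

    use admin
    db.createUser({
        user: "superadmin",
        pwd: "changeme",
        roles: [ { role: "root", db: "admin" } ]
    })
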
[23:36:35] <thearchitectnlog> i have a problem: i am migrating from mysql to mongodb
[23:37:13] <thearchitectnlog> i want to aggregate (group by author_id) but i am getting redundant values; here is the script
[23:37:22] <thearchitectnlog> var a=db.autdet.aggregate([{ $group : { _id : "$author_id", language_id: { $push: "$language_id" }, name: { $push: "$name" }, nationality: { $push: "$nationality"},bio:{ $push: "$bio" } }}]);db.auts.insert(a);
[23:38:01] <thearchitectnlog> only language_id is different; for the other fields I'm getting redundant data
[23:45:32] <thearchitectnlog> ?
[23:48:48] <metasansana> does mongodb support limits on the number of documents created in a collection?
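
For reference: yes, via capped collections, which take a mandatory size in bytes plus an optional max document count; a minimal sketch with placeholder names:

    db.createCollection("recent_events", { capped: true, size: 1048576, max: 5000 })
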
[23:53:23] <thearchitectnlog> guys i need to group documents having same id into one document anyone can help ?
[23:54:07] <thearchitectnlog> i tried this var a=db.autdet.aggregate([{ $group : { _id : "$author_id", language_id: { $push: "$language_id" }, name: { $push: "$name" }, nationality: { $push: "$nationality"},bio:{ $push: "$bio" } }}]) but it's giving redundant data
[23:55:03] <thearchitectnlog> someone here
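
No one answered before the log ends. A hedged rework of the script above, assuming name, nationality and bio are identical for every row sharing an author_id: take those with $first and de-duplicate language_id with $addToSet. $out (2.6+) writes the result straight into auts; on 2.4, insert the .result array of the aggregate return value instead of the raw return value:

    db.autdet.aggregate([
        { $group: {
            _id:         "$author_id",
            language_id: { $addToSet: "$language_id" },   // keep each distinct language once
            name:        { $first: "$name" },             // identical per author, so take one
            nationality: { $first: "$nationality" },
            bio:         { $first: "$bio" }
        } },
        { $out: "auts" }                                  // requires 2.6
    ])
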