PMXBOT Log file Viewer

#mongodb logs for Tuesday the 25th of September, 2012

[01:06:02] <patrickod> why is there no out: option with aggregation?
[01:06:10] <patrickod> it's almost useless with a 16mb limit
[01:38:37] <Max-P> Hi, I need to query for the last element of an array in a document, how can I do that? I need to find the immediate children in a tree (all parents are in an ordered array)
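
One possible way to read just the last element of an array (a sketch only; the collection name "nodes", the "parents" field, and someNodeId are placeholders, not from the question) is the $slice projection, which accepts a negative count to slice from the end:

    db.nodes.find(
        { _id: someNodeId },            // the node whose ancestor list we want to inspect
        { parents: { $slice: -1 } }     // project only the last element of the "parents" array
    )

Matching on the last element inside the query condition itself is not directly expressible this way; a common workaround is to also store the direct parent in its own field.
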
[02:37:49] <niriven> arrgh, mongo.
[02:37:55] <niriven> i so badly want it to work :(
[02:40:34] <niriven> so a long story short, i get an assignment to import 151 million events, and 1075 users into mongodb. the event may or may not 'relate' to a user (4% of the events actually relate.) so i start by inserting all events related to a user into event_a, and all events not related to a user into event_u, and all users into user. great, works fine, until i have to do some really complicated queries (eg. find users that have events of type a b
[02:41:44] <niriven> so i have to join in code, fair enough, but then things no longer become optimized even though things are indexed. I have to query id's up front that match criteria a, and ids that match criteria b, then query mongo and do some logic in code. so i figure, why not represent everything as a user object? {user {events}}, great, until indexing crashes!
[02:41:46] <niriven> blah.
[02:43:21] <niriven> creating indexes crashed mongo, sometimes it did, sometimes it didnt, thought i ran out of memory, grabbed an ec2 instance with 16 gigs of memory instead of 8 gigs, and high io, another crash.
[06:37:26] <sunfmin> Hello
[06:37:32] <sunfmin> I have an object like this: http://pastie.org/4795916
[06:38:06] <sunfmin> How can I use $pull to remove the element of "apples" whose name is "App2"?
[06:38:20] <wereHamster> sunfmin: what have you tried so far?
[06:38:34] <sunfmin> db.bbb.update({"name": "Felix"}, {"$pull": {"apples": {"$where": "this.name=='App2'"}}}) ?
[06:38:46] <sunfmin> db.bbb.update({"name": "Felix"}, {"$pull": {"apples.name": "App2"}})
[06:38:54] <sunfmin> db.bbb.update({"name": "Felix"}, {"$pull": {"apples": {"name": "App2"}}})
[06:39:01] <sunfmin> all doesn't work.
[06:41:21] <wereHamster> the document name is Felix211, not Felix
[06:42:08] <sunfmin> yeah, it should be Felix, I tried it correctly, but pastied here wrong
[06:42:22] <wereHamster> your third update() works here
[06:42:29] <wereHamster> (when I use the correct condition)
[06:43:12] <sunfmin> yeah, it worked… my condition is wrong...
[06:43:36] <sunfmin> wereHamster: I tried it again, it worked, thanks!
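
For reference, a minimal sketch of the $pull form that ended up working here, assuming the document holds an "apples" array of subdocuments that each carry a "name":

    db.bbb.update(
        { "name": "Felix" },
        { "$pull": { "apples": { "name": "App2" } } }   // removes array elements whose name is "App2"
    )
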
[07:22:04] <coalado> I'd like to do a query like 'find all documents where doc.myField contains "abc"', i.e. where doc.myField contains a given piece of text.
[07:22:14] <coalado> is regex the only option to do this?
[07:22:52] <wereHamster> yes
[07:23:01] <wereHamster> well, or use $where
[07:24:12] <coalado> thanks
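
A sketch of the two options mentioned above (collection and field names are placeholders); note the regex variant can only use an index when the pattern is anchored to the start of the string:

    // substring match with a regular expression
    db.docs.find({ myField: /abc/ })

    // the $where alternative runs JavaScript per document and is much slower
    db.docs.find({ $where: "this.myField.indexOf('abc') !== -1" })
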
[07:37:18] <[AD]Turbo> hola
[08:38:18] <Signum> I start to like the new aggregation feature in 2.1/2.2. But has anyone understood how to group on nested data? i.e. http://pastebin.com/9DsetKSf
[08:45:15] <NodeX> $distributions.debian.maintainer_email
[08:46:06] <Signum> NodeX: Wow, indeed, that worked. Why the "$"?
[08:46:35] <NodeX> that's how you define a field in the framework
[08:49:42] <Signum> NodeX: I see. Thanks.
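
A sketch of how that "$" field reference sits inside a $group stage; the collection name and the rest of the pipeline are guesses (the pastebin contents aren't in the log), only the field path comes from the conversation:

    db.packages.aggregate(
        { $group: {
            _id: "$distributions.debian.maintainer_email",   // "$" marks a field path rather than a literal string
            count: { $sum: 1 }
        } }
    )
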
[08:50:24] <Signum> Seems like the new-style aggregation helps avoid map/reduce for some quite common cases. Plus I like the UNIX-style pipelines. :)
[08:50:30] <Signum> For some bogus reason m/r scares me.
[08:56:13] <NodeX> M/R also locks by thread iirc
[08:56:22] <NodeX> Aggregation Framework is great
[09:02:25] <Neptu__> hey, just checking this for a while now.... how can i add an array to an array in the c++ driver...
[09:45:12] <Neptu__> { one: 6, two: 7, RESULTS: { 0: [ "bar", "baz", "10" ], 1: [ "bar", "baz", "20" ], 2: [ "bar", "baz", "30" ] } }
[09:45:35] <Neptu__> this is not an array of arrays... this is an object containing arrays?
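
For what it's worth, the RESULTS value above is indeed an object whose keys are "0", "1", "2", each holding an array; a true array of arrays would look like this in shell notation (how to build it depends on the C++ driver's array builder, which isn't shown here):

    { one: 6, two: 7, RESULTS: [ [ "bar", "baz", "10" ],
                                 [ "bar", "baz", "20" ],
                                 [ "bar", "baz", "30" ] ] }
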
[10:21:18] <newbie_> i need someone to recommend a good mongodb gui for admin/querying/updating.
[10:21:28] <newbie_> please
[10:22:05] <newbie_> have tried some already but far from impressed. Monjadb is my latest experiment, which is also not exactly okay
[10:22:53] <meghan> newbie_ check out http://www.mongodb.org/display/DOCS/Admin+UIs
[10:23:46] <newbie_> meghan: which single one is very useful?
[10:24:18] <coalado> I like http://blog.mongovue.com/
[10:25:36] <meghan> newbie_ I have heard good things about genghis and mongovue so i might start with those
[10:26:08] <newbie_> okay. thanks.
[10:29:36] <newbie_> meghan: seems i tried it long time ago, had minor issues with the non-free part, and some other thing i cant remember. but thanks.
[10:30:09] <newbie_> need to make my mongo secure. it isn't secure by default and i can connect to it remotely without being asked for any user/password
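
A minimal sketch of tightening that up on a 2.2-era install: start mongod with --bind_ip 127.0.0.1 and/or --auth (or the equivalent config file settings), then create an admin user from the shell. The username and password below are placeholders:

    use admin
    db.addUser("admin", "a-strong-password")   // 2.2-era shell helper; once --auth is on, clients must authenticate
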
[10:49:32] <Cubud> NodeX : I think I have thought of a way to ensure my video ratings are up to date and accurate
[10:50:14] <Cubud> I have 2 members on "Video" called "StatisticsCalculatedUntil" and "LastStatisticalChangeOn"
[10:50:32] <Cubud> When I add a rating I update video.LastStatisticalChangeOn and then insert the rating
[10:51:13] <Cubud> That way if the rating doesn't get saved a process will later run to select ratings between StatisticsCalculatedUntil and LastStatisticalChangeOn and simply find nothing to update
[10:52:14] <Cubud> If it does find data then I can do an atomic update on the Video updating both the statistics + StatisticsCalculatedUntil
[10:52:19] <Cubud> How does that sound?
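
A sketch of what those writes might look like in the shell; the field names come from the description above, while the collection names ("videos", "ratings"), videoId, and the counter fields are invented for illustration:

    // 1. mark the statistics as stale, then insert the rating
    db.videos.update({ _id: videoId }, { $set: { LastStatisticalChangeOn: new Date() } })
    db.ratings.insert({ videoId: videoId, stars: 4, createdOn: new Date() })

    // 2. later, a batch job folds any ratings found in the stale window into the totals
    //    and advances StatisticsCalculatedUntil in one atomic update on the video
    db.videos.update(
        { _id: videoId },
        { $inc: { ratingCount: 1, ratingSum: 4 },
          $set: { StatisticsCalculatedUntil: new Date() } }
    )
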
[12:52:14] <Gargoyle> ping Derick
[14:05:11] <solars> hey, I (or my boss) came across http://blog.engineering.kiip.me/post/20988881092/a-year-with-mongodb - is there any discussion going on about this? I've recently read this anonymous blah article putting mongodb in a bad light so I'm a bit curious
[14:11:59] <NodeX> bad light?
[14:12:20] <NodeX> the only thing that puts mongo in a bad light recently is idiotic users who dont know what they're doing
[14:14:13] <NodeX> the majority of "The bad" in that post have been addressed or are being addressed. Mongo cannot be held responsible for a developer's poor choice of schema design and bad coding practices
[14:15:01] <rcmachado> I think the biggest error is using MongoDB as if it was a traditional database (like MySQL or Postgres).
[14:15:25] <NodeX> that is the number one epic fail for users of mongo
[14:18:22] <Derick> Gargoyle: pong
[14:18:40] <Gargoyle> Derick: Afternoon.
[14:18:58] <Gargoyle> Derick: What was that weather site you pointed me towards the other week?
[14:19:29] <solars> NodeX, I'm referring to this: http://www.readwriteweb.com/cloud/2011/11/hacker-news-and-the-damage-don.php
[14:20:32] <NodeX> I never read it
[14:20:43] <trepidacious> Anyone know whether it is possible to read/write plain JSON strings from/to a document?
[14:20:59] <trepidacious> From Scala *or Java(
[14:21:02] <NodeX> trepidacious : you have to encode them
[14:21:12] <trepidacious> To BSON?
[14:21:13] <NodeX> escape *
[14:21:44] <trepidacious> Ah that's ok, what I have is definitely valid JSON
[14:21:52] <NodeX> http://pastebin.com/raw.php?i=FD3xe6Jt <---- this post lol what a joke
[14:22:20] <trepidacious> But all the examples I can see involve working with a representation of a JSON object in memory, then writing that
[14:22:48] <trepidacious> I already have code to serialise and deserialize my own data objects
[14:23:04] <NodeX> I dont know about your specific driver, I store Json fine using the php driver - I just have to escape it
[14:23:06] <Derick> Gargoyle: yr.no ?
[14:23:25] <trepidacious> Ah ok, this is specific to Casbah I guess
[14:23:26] <Gargoyle> That was it. Thanks!
[14:24:13] <fitzagard> solars: I believe the root of any proper architecture discussion is "does this thing fit our needs". If the answer is "maybe" then you experiment, test, verify, rinse and repeat the process.
[14:24:52] <NodeX> solars : Mongo never claims(ed) to write data deterministically, it never claimed to do half of the stuff in that post lol
[14:25:10] <http402> solars: it depends on what you intend to use your data for
[14:25:23] <http402> all of the "negative" points in that kiip article are actually legit
[14:25:24] <NodeX> what it does -correctly- do (in my opinion) is take all of that important stuff and leave it in the hands of the developer
[14:25:31] <http402> as of at least 2.2 which im running a cluster of right now
[14:26:04] <NodeX> which results in speed and performance
[14:26:15] <http402> well except "global" lock is now down to "database" level lock i believe (note: database, not collection) and i believe it's still a global write lock, just no longer global read locks which screw with replication
[14:26:36] <solars> NodeX, which post are you referring to
[14:26:45] <solars> thats why I'm asking if there are comments to it :)
[14:26:47] <NodeX> solars : the second one you posted
[14:26:59] <NodeX> and in part the kiip one
[14:27:03] <solars> NodeX, the second one is not relevant
[14:27:20] <NodeX> the developer is mad because he didn't think his dataset through first and had to migrate away
[14:27:39] <NodeX> or he is mad because his way of thinking cannot get the best out of a datastore
[14:27:42] <solars> the second link is basically about another post, that made up stuff to put mongo into bad light
[14:27:44] <solars> as I said
[14:27:47] <http402> NodeX: ive met a lot of developers that actually got mad because Mongo doesn't do a good job of describing which queries don't work well dynamically
[14:28:01] <solars> that is why I posted the first link as I'm not sure about these 'facts' and wanted to ask if there is already a discussion or comments
[14:28:02] <NodeX> doesnt do a good job ?
[14:28:12] <http402> for example, indexing doesn't work across arrays
[14:28:12] <NodeX> isn't that what testing is for
[14:28:51] <http402> of course, you should test it out in your use case all of the way
[14:29:09] <NodeX> I've been using Mongo for a very long time in production, I have never had a problem (apart from early geo spatial queries) that I could not get around
[14:29:14] <http402> also, make sure that you keep your data set in ram as much as possible, it doesn't swap well, but thats not mongo-specific
[14:29:37] <NodeX> I have no performance qualms and i never have
[14:29:39] <http402> riak doesnt either and redis wont even let you exceed ram
[14:29:51] <NodeX> redis != mongo
[14:30:04] <NodeX> for that matter nor is riak
[14:30:27] <http402> nothing is mongo :-D not saying otherwise, im saying what alternatives fit other use cases and the choices
[14:30:34] <http402> mysql swaps pretty well relatively
[14:30:53] <http402> if your data fits nicely into the mysql model, you'll probably be happier sticking with mysql
[14:30:53] <NodeX> Mongo doesn't claim to be a drop in replacement for anything
[14:31:05] <http402> NodeX: nobody is saying it is
[14:31:08] <NodeX> this guy makes out like it's the saviour of databases
[14:31:15] <NodeX> kiip guy is saying exactly that
[14:31:40] <NodeX> there is a reason I have not touched an RDBMS for close to 2 years - because I dont need to
[14:31:49] <aboudreault> NodeX, add a comment :P
[14:32:00] <NodeX> :P
[14:32:23] <NodeX> it annoys me when people who can't think through a problem blame software
[14:33:22] <aboudreault> NodeX, does he explain his use case in his blog? haven't got time to read it yet.
[14:33:54] <NodeX> 85 million documents for something or other
[14:34:28] <NodeX> doesn't really go into it
[14:34:30] <NodeX> Operations per second: 520 (Create, reads, updates, etc.)
[14:36:01] <aboudreault> k
[14:38:34] <topriddy1> NodeX: well it is a database and hence should act as such
[14:40:03] <NodeX> define how a database should act
[14:43:53] <topriddy1> NodeX: ACID.
[14:44:57] <NodeX> so every database in the world is ACID compliant?
[14:45:02] <topriddy1> NodeX: also i would like integrity too though.
[14:45:20] <NodeX> you can't have all of them, you can have 2 of 3
[14:45:39] <fitzagard> NodeX: I believe that applies to CAP.
[14:45:44] <topriddy1> NodeX: i'm not an expert in these topics. apologies already. but i have been well versed in databases (specifically Oracle in the past)
[14:45:57] <NodeX> fitzagard : and ?
[14:46:09] <NodeX> I was talking about CAP ;)
[14:46:20] <fitzagard> +1
[14:46:26] <topriddy1> anyway Mongo just keeps giving me the feeling that it'll break any moment.
[14:46:34] <NodeX> topriddy1 : then dont use it
[14:46:48] <fitzagard> topriddy1: there are several who are using mongo in production
[14:46:50] <NodeX> if your data model does not fit into it then use something else
[14:47:19] <fitzagard> And we're not talking simple architecture
[14:47:37] <fitzagard> NodeX: +1 on CAP…sorry…thought you were talking ACID
[14:47:42] <NodeX> no m8 ;)
[14:47:51] <topriddy1> NodeX: my app db isn't exactly critical, so i'm experimenting with it for now. i just have a few entities which don't have major risk.
[14:47:51] <NodeX> ACID is something from the 60's !!
[14:48:01] <topriddy1> or few collections.
[14:48:28] <NodeX> define risk ?
[14:48:49] <NodeX> if you're after transactions think again - this will never fully happen unless you bake it in on the app level
[14:49:09] <topriddy1> NodeX: quick one, if i do db.users.save({name: "Paul"}); maybe i could still do: db.users.save({name: 32});
[14:49:13] <fitzagard> NodeX: you should blog about all this ;)
[14:49:16] <fitzagard> if you haven't already
[14:49:44] <topriddy1> NodeX: yeah you getting my point. MongoDB sort of trusts the application guy to do things right.
[14:49:46] <NodeX> topriddy1 : mongo is fire and forget, the latter will overwrite the former
[14:50:08] <topriddy1> NodeX: the latter would create a new record/entry actually.
[14:50:22] <NodeX> unless .... you have safe writes on in which case it will write to X nodes / Disks before returning
[14:50:39] <NodeX> yes sorry create
[14:51:29] <topriddy1> NodeX: well i am using one node. i am working on a chat app, where i need to store some info on the user in a db (mongo), also i post locations periodically. thats about it. can survive without a traditional db.
[14:51:37] <NodeX> [15:47:37] <topriddy1> NodeX: quick one, if i do db.users.save({name: "Paul"}); maybe i could still do: db.users.save({name: 32}); <--- I dont see the problem
[14:51:54] <NodeX> chat room app ?
[14:51:57] <topriddy1> NodeX: the problem is in TYPE information
[14:52:13] <NodeX> int's vs strings etc?
[14:52:30] <topriddy1> NodeX: yeah. reverse my example and you would see the issue.
[14:52:49] <NodeX> I dont see an issue because everything should always be cast in all languages
[14:53:06] <NodeX> even in RDBMS i would not trust the DB to work that out for me
[14:53:33] <topriddy1> NodeX: you cant store a string in a number datatype. it should throw an Exception. Mongo seems to accept this.
[14:54:01] <topriddy1> NodeX: also if you make a mistake with column-name, Mongo simply just creates a new column. (bug introduced this way)
[14:54:04] <NodeX> datatype?
[14:54:07] <topriddy1> so well its what it is.
[14:54:32] <NodeX> I think you're talking about your driver when you refer to "datatype"
[14:55:03] <topriddy1> NodeX: unfortunately so. Mongo doesnt have this info in itself about "datatypes"
[14:55:36] <NodeX> so what's throwing the Exception ?
[14:55:42] <topriddy1> NodeX: i'm also scared of what might happen if i mistakenly do a db.users.remove();// not sure yet if i have a rollback mechanism. i dont know if i can even do transactions, etc.
[14:56:01] <topriddy1> NodeX: I said it should throw the exception. (ideally)
[14:56:20] <NodeX> topriddy1 : mongo has NO schema ergo it shouldn't throw anything
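
A quick shell illustration of that point: both saves are accepted because nothing enforces a schema, and the $type operator is about the only server-side way to spot the resulting mix (BSON type 2 is "string"):

    db.users.save({ name: "Paul" })          // name stored as a string
    db.users.save({ name: 32 })              // also accepted; a new document with name as a number
    db.users.find({ name: { $type: 2 } })    // returns only the documents where name is a string
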
[14:56:33] <http402> topriddy1: one safety mechanism other people have used is delayed replica set - keep one replica 15 minutes behind the others and you have a window of recovery
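
A sketch of configuring such a delayed member (the member index and the delay are examples; a delayed member should also be priority 0, and usually hidden, so it can never become primary):

    cfg = rs.conf()
    cfg.members[2].priority = 0
    cfg.members[2].hidden = true
    cfg.members[2].slaveDelay = 900    // stay 15 minutes behind the primary
    rs.reconfig(cfg)
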
[14:56:39] <topriddy1> NodeX: i'm newbie to mongodb, still learning a lot though,
[14:56:42] <NodeX> Mongo has no transactions
[14:56:49] <NodeX> Mongo has no rollback
[14:57:05] <topriddy1> those are bleh for some kind of apps
[14:57:26] <NodeX> hence it doesnt fit all kinds of apps
[14:57:44] <topriddy1> NodeX: thanks for engaging me sir/ma. hopeful it fits my chat app.
[14:58:07] <NodeX> http://www.boneheadbeats.co.uk/radio <---- 60 minutes from start to finish on that chat room
[14:58:18] <NodeX> Mongo based, redis backed, Node.js for the transport
[14:58:53] <NodeX> infact not sure if it's even running lol
[14:58:59] <topriddy1> NodeX: i'm using xmpp. mobile based. just put certain info on mongo. java+morphia at the moment. things working fine.
[15:00:05] <topriddy1> NodeX: you wrote that i guess?
[15:01:50] <NodeX> yer, quick POC for a friend's radio station
[15:08:22] <mdeboard> Hi, when I added a member to a replset, that new member was suddenly primary. Now my former primary is in "rollback" status. My intent was to add this new member and start replicating the data but now ... am I losing all my data or what?
[15:09:27] <mdeboard> "We are ahead of the primary, trying to roll back."
[15:09:28] <mdeboard> What.
[15:09:59] <mdeboard> This isn't right, right? This isn't by design, surely.
[16:42:35] <timeturne> does anyone periodically determine if a user is considered "active" or not and save it in the user's document?
[16:42:55] <NodeX> define active?
[16:49:07] <timeturne> that's actually what I'm trying to figure out. basically the scenario is that user A creates a team. Users C, D, E, and F join that team. User A is not the team captain in real life, he/she just happened to be the first one to create the team. Some time later, User G shows up and requests User A to become the team captain because User G is the real team captain. If User A accepts then User A becomes just a team member or a
[16:49:08] <timeturne> co-captain. but I'm trying to think about what should happen if User A doesn't respond to User G's request. that would basically mean that some other team member has to take User A's place and accept or decline User G's request.
[16:51:38] <NodeX> I do something similar in one of my apps - checking that a user has "Activated" - I use workers to occasionally check my user collection(s) for it
[16:52:20] <_m> We save a "last_active_at" timestamp periodically.
[16:53:02] <NodeX> ^^ I also (for different reasons) update the timestamp every time someone does something to the record
[16:54:02] <_m> ^This. I think a fair standard set is: created_at, updated_at, last_active_at, last_login_at
[16:54:39] <_m> At least, most applications I've authored have tracked at least those.
[16:54:43] <timeturne> and then use a formula which averages those values ^?
[16:54:52] <timeturne> to determine the status
[16:56:40] <_m> timeturne: If you save "last_active_at" you should have a singular value. An
[16:56:48] <NodeX> no need, just decide that no actions in X seconds = inactive
[16:56:55] <NodeX> +1
[16:58:15] <_m> Right. Sample rails code: If some_user['last_active_at'] < 1.month.ago # inactive
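
The same kind of check straight from the mongo shell, a sketch assuming each user document carries a last_active_at date (userId and the one-month cutoff are placeholders):

    var cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)                 // roughly one month ago
    db.users.find({ last_active_at: { $lt: cutoff } })                           // users considered inactive
    db.users.update({ _id: userId }, { $set: { last_active_at: new Date() } })   // touch on each activity
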
[16:58:25] <NodeX> I'll take that as a compliment as I don't know what "sagely" means lol
[17:20:24] <timeturne> awesome, thanks NodeX and _m
[17:25:20] <NodeX> good luck
[17:32:22] <Turicas> I can't load CSS of api.mongodb.org/wiki/current/... (I've tried with and without proxy). Is there any place where I can fill a bug?
[17:36:43] <timeturne> Turicas: I don't think there is any css on that page.
[17:38:55] <Turicas> timeturne, But I think this page should have more CSS
[17:39:14] <Turicas> timeturne, this page calls styles/site.css (it exists)
[17:39:31] <Turicas> timeturne, but site.css has some error messages in there, like "/* Could not locate resource: /includes/css/master.css */"
[17:39:54] <timeturne> yeah then whoever did the docs screwed up the markup
[17:41:25] <shadow_fox> hello, i'm new to mongodb, actually i know nothing, i just downloaded the 32bit tgz file for my linuxmint13 OS
[17:41:55] <shadow_fox> but i am a little confused about what to move and where, /opt or /usr/local/bin
[17:43:04] <shadow_fox> .ask
[17:43:54] <shadow_fox> anyone can guide me ?
[17:48:44] <shadow_fox> ??
[17:50:18] <timeturne> usr/local/bin is fine
[17:50:46] <timeturne> mainly just follow the norms but in actuality it doesn't matter that much
[17:51:34] <shadow_fox> timeturne: should i move the extracted file to usr/local/bin ??
[17:51:39] <shadow_fox> or just the bin dir?
[17:51:55] <NodeX> Need a name for our quaint site persona ... anyone care to input ? -> http://www.jobbasket.co.uk/maintenance
[17:52:05] <timeturne> just put the whole mongodb folder in bin
[17:52:43] <shadow_fox> ok thanks timeturne:
[17:54:45] <rossdm> @NodeX Zombot, Frankenbot, Basket Bot
[17:58:46] <NodeX> nice one rossdm - much appreciated
[17:58:56] <NodeX> I like Basket Bot
[18:00:08] <rossdm> nice, good luck ;)
[18:01:26] <NodeX> ty ty;)
[18:05:41] <Turicas> timeturne_, do you know who is the maintainer of these docs? Probably there is an error on Sphinx doc compilation.
[18:15:15] <NodeX> I raised the issue a couple of days ago, nobody has responded yet
[18:47:12] <rcmachado> does anyone know some benchmark suite for Python drivers for MongoDB?
[18:47:54] <rcmachado> I want to test the performance of some drivers (asyncmongo), comparing it to the pymongo
[18:49:09] <rcmachado> Basically, to see (if|how much) the asynchronous really improves the performance
[18:49:20] <mids> rcmachado: also check out motor; http://emptysquare.net/motor/
[18:54:41] <rcmachado> mids: I saw it, I want to test it too :)
[18:56:04] <mids> https://github.com/ajdavis/chirp is implemented with a bunch of different mongodb drivers
[18:56:22] <mids> maybe you can use some web based load balancer on it
[18:56:27] <rcmachado> mids: but here where I work some guy is working on an implementation, more like asyncmongo
[18:56:54] <rcmachado> mids: and I want to test it too, to see if it is really worth it
[18:58:50] <rcmachado> mids: I will look at that! thanks!
[19:22:01] <aster1sk_> Hey mongodb friends. I'm playing with the new aggregate framework and struggling to work with arbitrary keys. I have a heavily nested document and need some direction.
[19:27:56] <aster1sk_> Say I had a.b.c.123.c (the 123 is a mysql id) - I'd like to group / aggregate on 'c'.
[19:28:40] <aster1sk_> I think unwind is what I need but I've just about run out of patience.
[19:38:31] <dstorrs> morning all. In a sharded cluster, I'm connected to a config server and I want to ask "are the config servers currently read-only?" what is the correct command?
[19:38:45] <crudson> aster1sk_: I assume the second 'c'; there are two. Can you create a paste of your document(s) and what you are executing?
[19:43:26] <whitaker> Howdy. Rails/Mongoid question here. I've recently upgraded my mongodb instance to v2.2.0, and my mongoid gem (under Rails v3.2.8) to v3.1.0. I have a model which includes Mongoid::Document, and defines a field as type Array. Before, attempts to assign this field with scalar values would properly throw a Mongoid::Errors::InvalidType error, but that behavior seems to have gone away. Now, type checking seems to have been disabled on
[19:43:27] <whitaker> fields declared as Array, such that anything can be assigned to the field. I've searched in vain for workarounds to restore type check validation; anyone else dealt with this behavior?
[19:49:19] <whitaker> <- thinking posting a query during San Francisco lunchtime not the best idea
[20:08:48] <kchodorow> dstorrs: like, "is one of you down and therefore the rest of you are readonly?"
[20:08:56] <dstorrs> exactly
[20:09:25] <dstorrs> or should I just look in a mongos log and see if the distrib lock pinger is responding?
[20:13:41] <kchodorow> i think the lock pinger is the best you're going to do, they don't "really" become read-only, mongos just stops trying to write to them
[20:17:10] <elliot-w> hi mongoids. We're maxing out file descriptors in our nodejs/mongodb(replicaset+gridfs) QA environment. My questions -- will these open descriptors be recycled when the offending process is restarted? Other than boosting the open descriptor limit in the OS to fix the issue... is there any way to catch this before it happens? I'm concerned that this will come up again in production.
[20:18:14] <whitaker> (Shorter version of question I asked before): has type check validation of Array field type in Mongoid::Document gem been recently disabled?
[20:18:35] <cyberd0m> Hi, I've asked in #node.js but maybe it's more pertinent to ask here. Basically, I can connect to mongo with the 'mongo' shell.. but not from node. I really have no idea what to try.. this should work.
[20:22:24] <dstorrs> kchodorow: ok, thanks
[20:25:22] <saml> hey, is there default rc file? ~/.mongodrc ?
[20:26:24] <aster1sk> mongod --config <path>
[20:26:27] <cyberd0m> if it can help, I can connect with mongojs but not mongodb (the node module)
[20:27:03] <_m> whitaker: Are your documents being stored as an array of scalars or … ?
[20:27:19] <elliot-w> @cyberd0m - I'm doing that with no problem. What errors are you seeing?
[20:27:41] <cyberd0m> elliot-w: I will paste a gist, one moment
[20:30:10] <cyberd0m> https://gist.github.com/325fad6f543a1afb4d2f
[20:31:13] <cyberd0m> I was trying to use it from mongoose.. i.e. require('mongoose').connect('mongodb://localhost/test').. but couldn't get it to work, so I've tried with 'mongodb' with no more success.
[20:39:08] <aster1sk> Lets pretend I have this : a.b.c.123.v
[20:39:10] <aster1sk> I want to $group : { 'views' : { '$sum' : 'a.b.c.<unknown>.v' } }
[20:39:24] <aster1sk> 123 can be arbitrary.
[20:39:45] <bhosie> cyberd0m: this has worked for me: http://pastebin.com/abJgxknz
[20:40:35] <cyberd0m> bhosie: May I ask if the .connect work on your side?
[20:40:57] <aster1sk> M/R is out of the question (master / slave topology).
[20:40:58] <elliot-w> @cyberd0m - are you running mongod?
[20:41:15] <bhosie> haven't tried
[20:41:54] <cyberd0m> elliot-w: yes, I can connect from the shell command and using mongojs
[20:43:02] <cyberd0m> bhosie: trying your solution
[20:45:16] <cyberd0m> bhosie: it works!
[20:45:56] <bhosie> cyberd0m: :)
[20:46:10] <cyberd0m> maybe it's a bug with the .connect
[20:48:45] <aster1sk> So basically I'm trying to $sum a key within an unknown sub document.
[20:49:33] <aster1sk> There is little documentation I can find about this, I'm hoping it's possible -- I spent weeks on the data model.
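
For what it's worth, $unwind only iterates arrays, so with 123 as an object key there is nothing for the pipeline to walk over. A common workaround (a sketch; the collection name and field layout are invented) is to store those entries as an array of subdocuments, after which the $sum is straightforward:

    // assumed document shape: { a: { b: { c: [ { id: 123, v: 10 }, { id: 456, v: 25 } ] } } }
    db.stats.aggregate(
        { $unwind: "$a.b.c" },
        { $group: { _id: null, views: { $sum: "$a.b.c.v" } } }
    )
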
[20:49:40] <cyberd0m> Because, here: http://mongoosejs.com, it says: mongoose.createConnection('localhost', 'test').. but just that is failing
[20:49:45] <DaveDev> is there a way to rebuild all the indexes from the command line (not in mongo shell)?
[20:50:04] <cyberd0m> but mongodb is clearly working as I've now connected to it from three different ways :-/
[20:50:27] <cyberd0m> sigh
[20:50:59] <cyberd0m> I'll ask in #mongoosejs
[20:53:01] <bhosie> cyberd0m: your error on .connect says you're missing a callback function. have you tried resolving that?
[20:53:43] <cyberd0m> bhosie: yes, just look at the line under it, I've written it with a callback
[21:11:34] <bhosie> cyberd0m: ah ok. hmmm why don't i even see .connect in the docs?
[21:11:47] <bhosie> http://mongodb.github.com/node-mongodb-native/api-generated/index.html
[21:13:38] <cyberd0m> bhosie: I think I've figured it out.. it was something to do with my hosts file
[21:13:58] <cyberd0m> basically, it worked with 127.0.0.1 but not localhost
[21:14:19] <cyberd0m> because I had something else forwarding to localhost in my host file
[21:16:07] <cyberd0m> sorry for the trouble :-(
[21:18:05] <cyberd0m> thanks for your help
[21:18:46] <bhosie> np glad you figured it out
[21:20:03] <cyberd0m> : )
[21:20:04] <cyberd0m> cya
[21:43:56] <SeyelentEco> Hi all, anyone know of an easy way to test if queries are being sent to replication servers? I'm running PHP, I have 4 replica sets (1 offsite), set up a separate environment on offsite box, but application is slow... seems as though it's querying primary.
[21:44:14] <SeyelentEco> I've set slaveokay for db connection
[21:56:17] <timeturner> is it worth it to shorten field names as much as possible in documents? possibly to single letters?
[22:02:18] <SeyelentEco> I've heard of people doing it (one presenter at MongoDB Seattle said his company did it)
[22:02:25] <SeyelentEco> saves on space for really large collections
[22:02:30] <wereHamster> timeturner: if the size of the keys contributes significantly to the overall data size, then you might consider it
[22:02:57] <wereHamster> otherwise stick to long names as it makes development easier.
[22:03:04] <timeturner> I wish mongo compressed the key values
[22:03:05] <wereHamster> .. and maintenance.
[22:03:10] <timeturner> I mean it's repeated so many times
[22:03:24] <wereHamster> some ODMs/driver can do that
[22:03:33] <timeturner> really?
[22:03:41] <timeturner> that would be awesome
[22:03:47] <wereHamster> shorten the fields, while still letting you access them by their long names.
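
One quick way to see how much the key names actually cost is Object.bsonsize() in the shell; a rough sketch comparing long and short keys for the same values (the field names are arbitrary examples):

    Object.bsonsize({ firstName: "Ada", lastName: "Lovelace", dateOfBirth: new Date() })
    Object.bsonsize({ f: "Ada", l: "Lovelace", d: new Date() })
    // the difference, multiplied by the document count, is roughly what the long names cost

Whether that saving is worth the loss in readability depends entirely on the collection size, as noted above.
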
[22:04:27] <timeturner> ah ha, https://github.com/ramiel/Alias-Field-Mongoose-plugin
[22:04:39] <timeturner> for node.js' mongoose ODM
[22:05:09] <wereHamster> last commit 8 months ago. Doubt it works with the latest mongoose
[22:06:03] <timeturner> yeah looks doubtful
[22:06:12] <timeturner> maybe I'll just use virtuals
[22:06:27] <timeturner> put them in my initMongoose js file or something
[22:06:30] <dbe> Hey everyone. I have records with a date field. I want to group based on "week buckets" which would be one group for every 7 days going backwards from today. Any idea on how to do this with the aggregation framework?
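
One hedged sketch for that: the 2.2 date operators make calendar-week buckets easy, which may be close enough; exact rolling 7-day buckets counted back from today are awkward in 2.2 and are usually computed client-side or with extra date arithmetic in a $project. Calendar-week version, with placeholder collection and field names:

    db.records.aggregate(
        { $project: { year: { $year: "$date" }, week: { $week: "$date" } } },
        { $group: { _id: { year: "$year", week: "$week" }, count: { $sum: 1 } } }
    )
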
[22:06:46] <timeturner> actually I'll put them in my schema files individually
[22:19:43] <timeturner> wereHamster: any opinions on this https://github.com/tblobaum/mongoose-troop
[22:23:13] <niriven> hi, is there a opposite of $ne in mongo? or do i just use $nin[1]
[22:24:48] <niriven> nevermind :P
[22:55:41] <nemmeviu1> hi guys, can you help me with an aggregate group by date, where my date is stored as an ISODate?
[22:56:36] <nemmeviu1> i need to group by month+day+year
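
A sketch of the usual approach, with placeholder collection and field names: project the date parts out with $year / $month / $dayOfMonth, then group on the compound key:

    db.events.aggregate(
        { $project: { y: { $year: "$createdAt" }, m: { $month: "$createdAt" }, d: { $dayOfMonth: "$createdAt" } } },
        { $group: { _id: { year: "$y", month: "$m", day: "$d" }, count: { $sum: 1 } } }
    )
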